Can I skip eva's assertion on signed overflow? - frama-c

Sample code:
void main() {
    unsigned int x;
    x = 1U << 31; // OK
    x = 1 << 31;  // signed overflow
    return;
}
frama-c-gui -eva main.c:
void main(void)
{
  unsigned int x;
  x = 1U << 31;
  /*# assert Eva: signed_overflow: 1 << 31 ≤ 2147483647; */
  x = (unsigned int)(1 << 31);
  return;
}
I get a red alarm because of the signed overflow on line 4. I have existing code with tons of hardware registers defined with mask bits and shifted bits like this. It's unreasonable to modify the code to add a "U" suffix to all the mask bits. Is there an option in the Eva plugin to treat these constants as unsigned integers?

There are some options in the kernel to control which kinds of alarms should be emitted (see frama-c -kernel-h or the manual, especially its section 6.3, for more information).
In your particular case, you are probably interested in -no-warn-signed-overflow, which will disable the alarms related to overflows in signed arithmetic. Eva will then assume 2's-complement arithmetic, and will emit a warning about that if the situation occurs, but only once for the whole analysis.
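For example, the analysis from the question can be rerun with the alarm disabled (same file name as above):
frama-c-gui -eva -no-warn-signed-overflow main.c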

Related

How to find magic multipliers for divisions by constant on a GPU?

I was looking at implementing the following computation, where divisor is nonzero and not a power of two
unsigned multiplier(unsigned divisor)
{
    unsigned shift = 31 - clz(divisor);
    uint64_t t = 1ull << (32 + shift);
    return t / divisor;
}
in a manner that is efficient for processors that lack 64-bit integer and floating-point instructions, but may have 32-bit fused multiply-add (such as GPUs, which also will lack division).
This calculation is useful for finding the "magic multipliers" involved in optimizing division by a divisor known ahead of time into a multiply-high instruction followed by a bitwise shift. Unlike the code used in compilers and the reference code in libdivide, it finds the largest such multiplier.
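For context, here is how such a multiplier is used at run time: a small self-checking sketch (not part of the code above) with the well-known ceil-based constant for unsigned division by 3, 0xAAAAAAAB = ceil(2^33 / 3) with shift 1, as emitted by compilers; the function above computes the floor-based variant instead.
#include <assert.h>
#include <stdint.h>

/* q = floor(n / 3) via one multiply-high and one shift */
static uint32_t div3(uint32_t n)
{
    uint32_t hi = (uint32_t)(((uint64_t)n * 0xAAAAAAABu) >> 32);  /* multiply-high */
    return hi >> 1;                                               /* post-shift */
}

int main(void)
{
    for (uint64_t n = 0; n <= UINT32_MAX; n += 9973)  /* spot check */
        assert(div3((uint32_t)n) == (uint32_t)n / 3);
    return 0;
}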
One additional twist is that in the application I was looking at, I anticipated that divisor will almost always be representable in float type. Therefore, it would make sense to have an efficient "fast path" that will handle those divisors, and a size-optimized "slow path" that would handle the rest.
The solution I came up with performs long division with remainder that is specialized for this particular scenario (dividend is a power of two) in 6 or 8 FMA operations on the "fast path", and then performs a binary search with 8 iterations on the "slow path".
The following program performs exhaustive testing of the proposed solution (needs about 1-2 minutes on an FMA-capable CPU).
#include <math.h>
#include <stdint.h>
#include <stdio.h>
struct quomod {
    unsigned long quo;
    unsigned long mod;
};

// Divide 1 << (32 + SHIFT) by DIV, return quotient and modulus
struct quomod
quomod_ref(unsigned div, unsigned shift)
{
    uint64_t t = 1ull << (32 + shift);
    return (struct quomod){t / div, t % div};
}
// Reinterpret given bits as float
static inline float int_as_float(uint32_t bits)
{
    return (union{ unsigned b; float f; }){bits}.f;
}

// F contains integral value in range [-2**32 .. 2**32]. Convert it to integer,
// with wrap-around on overflow. If the GPU implements saturating conversion,
// it also may be used
static inline uint32_t cvt_f32_u32_wrap(float f)
{
    return (uint32_t)(long long)f;
}
struct quomod
quomod_alt(unsigned div, unsigned shift)
{
    // t = float(1ull << (32 + shift))
    float t = int_as_float(0x4f800000 + (shift << 23));
    // mask with max(0, shift - 23) low bits zero
    uint32_t mask = (int)(~0u << shift) >> 23;
    // No roundoff in conversion
    float div_f = div & mask;
    // Caution: on the CPU this is correctly rounded, but on the GPU
    // native reciprocal may be off by a few ULP, in which case a
    // refinement step may be necessary:
    // recip = fmaf(fmaf(recip, -div_f, 1), recip, recip)
    float recip = 1.f / div_f;
    // Higher part of the quotient, integer in range 2^31 .. 2^32
    float quo_hi = t * recip;
    // No roundoff
    float res = fmaf(quo_hi, -div_f, t);
    float quo_lo_approx = res * recip;
    float res2 = fmaf(quo_lo_approx, -div_f, res);
    // Lower part of the quotient, may be negative
    float quo_lo = floorf(fmaf(res2, recip, quo_lo_approx));
    // Remaining part of the dividend
    float mod_f = fmaf(quo_lo, -div_f, res);
    // Quotient as sum of parts
    unsigned quo = cvt_f32_u32_wrap(quo_hi) + (int)quo_lo;
    // Adjust quotient down if remainder is negative
    if (mod_f < 0) {
        quo--;
    }
    if (div & ~mask) {
        // The quotient was computed for a truncated divisor, so
        // it matches or exceeds the true result
        // High part of the dividend
        uint32_t ref_hi = 1u << shift;
        // Unless quotient is zero after wraparound, increment it so
        // it's higher than true quotient (its high bit must be 1)
        quo -= (int)quo >> 31;
        // Binary search for the true quotient; search invariant:
        // quo is higher than true quotient, quo-2*bit is lower
        for (unsigned bit = 256; bit; bit >>= 1) {
            unsigned try = quo - bit;
            // One multiply-high instruction
            uint32_t prod_hi = 1ull * try * div >> 32;
            if (prod_hi >= ref_hi)
                quo = try;
        }
        // quo is zero or exceeds the true quotient, so quo-1 must be it
        quo--;
    }
    // Use the "left-pointing short magic wand" operator
    // to recover the remainder
    return (struct quomod){quo, quo *- div};
}
int main()
{
    fprintf(stderr, "%66c\r[", ']');
    unsigned step = 1;
    for (unsigned div = 3; div; div += step) {
        // Progress bar
        if (!(div & 0x03ffffff)) fprintf(stderr, "=");
        // Skip powers of two
        if (!(div & (div-1))) continue;
        unsigned shift = 31 - __builtin_clz(div);
        struct quomod ref = quomod_ref(div, shift);
        struct quomod alt = quomod_alt(div, shift);
        if (ref.quo != alt.quo || ref.mod != alt.mod) {
            printf("\nerror at %u\n", div);
            return 1;
        }
    }
    fprintf(stderr, "=\nAll ok\n");
    return 0;
}
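For reference, one way to build and run the exhaustive test with GCC or Clang on an x86-64 host with FMA support (the file name is arbitrary):
cc -O2 -mfma magic.c -lm -o magic && ./magic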

Serial Communication Between Arduino and EPOS: CRC Calculation Problems

I am trying to interface with an EPOS2 motor controller over RS232 serial with an Arduino Duemilanove (because it's what I had lying around). I got it to work for the most part - I can send and receive data when I manually calculate the CRC checksum - but I'm trying to dynamically control the velocity of the motor, which requires changing data and therefore a changing checksum. The documentation for calculating the checksum is here, on page 24:
http://www.maxonmotorusa.com/medias/sys_master/8806425067550/EPOS2-Communication-Guide-En.pdf
I copied the code directly out of this documentation, and integrated it into my code, and it does not calculate the checksum correctly. Below is a shortened version of my full sketch (tested, yielding 0x527C). The weirdest part is that it calculates a different value in my full sketch than in the one below, but both are wrong. Is there something obvious that I'm missing?
byte comms[6] = { 0x10, 0x01, 0x03, 0x20, 0x01, 0x02 }; // CRC should be 0xA888
void setup() {
    Serial.begin(115200);
}

void loop() {
    calcCRC(comms, 6, true);
    while (1);
}

word calcCRC(byte *comms, int commsSize, boolean talkative) {
    int warraySize = commsSize / 2 + commsSize % 2;
    word warray[warraySize];
    warray[0] = comms[0] << 8 | comms[1];
    Serial.println(warray[0], HEX);
    for (int i = 1; i <= warraySize - 1; i++) {
        warray[i] = comms[i * 2 + 1] << 8 | comms[i * 2];
        Serial.println(warray[i], HEX);
    }
    word* warrayP = warray;
    word shifter, c;
    word carry;
    word CRC = 0;
    //Calculate pDataArray Word by Word
    while (commsSize--)
    {
        shifter = 0x8000;
        c = *warrayP++;
        do {
            carry = CRC & 0x8000;
            CRC <<= 1;
            if (c & shifter) CRC++;
            if (carry) CRC ^= 0x1021;
            shifter >>= 1;
        } while (shifter);
    }
    if (talkative) {
        Serial.print("the CRC for this data is ");
        Serial.println(CRC, HEX);
    }
    return CRC;
}
I used the link below to calculate the checksum that works for this data:
https://www.ghsi.de/CRC/index.php?Polynom=10001000000100001&Message=1001+2003+0201
Thanks so much!!
Where to begin.
First off, you are using commsSize-- for your loop, which will go through six times when you have only three words in the warray. So you are doing an out-of-bounds access of warray, and will necessarily get a random result (or crash).
Second, the build of your first word is backwards from your other builds. Your online CRC suffers the same problem, so you apparently don't even have a reliable test case.
Third (not an issue for the test case), if you have an odd number of bytes of input, you are doing an out-of-bounds access of comms to fill out the last word. And you are running the CRC bits too many times, unless the specification directs some sort of padding in that case. (Your documentation link is broken so I can't see what's supposed to happen.) Even then, you are using random data for the padding instead of zeros.
The whole word conversion thing is a waste of time anyway. You can just do it a byte at a time, given the proper ordering of the bytes. That also avoids the odd-number-of-bytes problem. This will produce the 0xa888 from the input you gave the online CRC calculator (which are your bytes in a messed up order, but exactly as you gave them to the calculator):
unsigned char dat[6] = { 0x10, 0x01, 0x20, 0x03, 0x02, 0x01 };

unsigned crc1021(unsigned char *dat, int len) {
    unsigned crc = 0;
    while (len) {
        crc ^= *dat++ << 8;
        for (int k = 0; k < 8; k++)
            crc = crc & 0x8000 ? (crc << 1) ^ 0x1021 : crc << 1;
        len--;
    }
    return crc & 0xffff;
}
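For completeness, a tiny harness (my addition; it assumes the crc1021 routine above is in the same file) that prints the checksum for those reordered bytes, which should come out as A888:
#include <stdio.h>

unsigned crc1021(unsigned char *dat, int len);   /* the routine above */

int main(void)
{
    unsigned char dat[6] = { 0x10, 0x01, 0x20, 0x03, 0x02, 0x01 };
    printf("CRC = %04X\n", crc1021(dat, 6));     /* expected: CRC = A888 */
    return 0;
}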

C++: OpenSSL, aes cfb encryption [duplicate]

I tried to implement a "very" simple encryption/decryption example. I need it for a project where I would like to encrypt some user information. I can't encrypt the whole database but only some fields in a table.
The database and most of the rest of the project works, except the encryption:
Here is a simplified version of it:
#include <openssl/aes.h>
#include <openssl/evp.h>
#include <iostream>
#include <string.h>

using namespace std;

int main()
{
    /* ckey and ivec are the two 128-bits keys necessary to
       en- and recrypt your data. Note that ckey can be
       192 or 256 bits as well
    */
    unsigned char ckey[] = "helloworldkey";
    unsigned char ivec[] = "goodbyworldkey";
    int bytes_read;
    unsigned char indata[AES_BLOCK_SIZE];
    unsigned char outdata[AES_BLOCK_SIZE];
    unsigned char decryptdata[AES_BLOCK_SIZE];

    /* data structure that contains the key itself */
    AES_KEY keyEn;

    /* set the encryption key */
    AES_set_encrypt_key(ckey, 128, &keyEn);

    /* set where on the 128 bit encrypted block to begin encryption */
    int num = 0;

    strcpy((char*)indata, "Hello World");
    bytes_read = sizeof(indata);

    AES_cfb128_encrypt(indata, outdata, bytes_read, &keyEn, ivec, &num, AES_ENCRYPT);
    cout << "original data:\t" << indata << endl;
    cout << "encrypted data:\t" << outdata << endl;

    AES_cfb128_encrypt(outdata, decryptdata, bytes_read, &keyEn, ivec, &num, AES_DECRYPT);
    cout << "input data was:\t" << decryptdata << endl;

    return 0;
}
But the output of the "decrypted" data is some random characters; they are the same after every execution of the code, while outdata changes with every execution...
I tried to debug it and searched for a solution, but I couldn't find one for my problem.
Now my question: what is going wrong here? Or do I completely misunderstand the provided functions?
The problem is that AES_cfb128_encrypt modifies the ivec (it has to in order to allow for chaining). Your solution is to create a copy of the ivec and initialize it before each call to AES_cfb128_encrypt as follows:
const char ivecstr[AES_BLOCK_SIZE] = "goodbyworldkey\0";
unsigned char ivec[AES_BLOCK_SIZE];
memcpy( ivec , ivecstr, AES_BLOCK_SIZE);
Then repeat the memcpy before your second call to AES_cfb128_encrypt.
Note 1: Your initial vector was a byte too short, so I put an explicit additional \0 at the end of it. You should make sure all of your strings are of the correct length when copying or passing them.
Note 2: Any code which uses encryption should REALLY avoid using strcpy or any other copy of unchecked length. It's a hazard.
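Put together, a minimal sketch of the question's flow with that fix applied (key and IV values are illustrative only; this sticks to the legacy AES_* API from <openssl/aes.h> used above, which OpenSSL 3.0 deprecates in favour of EVP):
#include <openssl/aes.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* 16-byte key and IV; real code should use randomly generated values */
    static const unsigned char key[16] = "0123456789abcde";
    static const unsigned char iv_orig[AES_BLOCK_SIZE] = "goodbyworldkey\0";
    unsigned char ivec[AES_BLOCK_SIZE];
    unsigned char indata[AES_BLOCK_SIZE] = "Hello World";
    unsigned char outdata[AES_BLOCK_SIZE];
    unsigned char decrypted[AES_BLOCK_SIZE];
    AES_KEY keyEn;
    int num;

    AES_set_encrypt_key(key, 128, &keyEn);

    /* fresh IV state (and intra-block offset) before encrypting */
    memcpy(ivec, iv_orig, AES_BLOCK_SIZE);
    num = 0;
    AES_cfb128_encrypt(indata, outdata, sizeof indata, &keyEn, ivec, &num, AES_ENCRYPT);

    /* fresh IV state again before decrypting; CFB decryption reuses the
       encryption key schedule, so keyEn stays as it is */
    memcpy(ivec, iv_orig, AES_BLOCK_SIZE);
    num = 0;
    AES_cfb128_encrypt(outdata, decrypted, sizeof outdata, &keyEn, ivec, &num, AES_DECRYPT);

    printf("round trip: %s\n", decrypted);  /* prints "Hello World" */
    return 0;
}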

how to convert double between host and network byte order?

Could somebody tell me how to convert a double-precision value into network byte order?
I tried
uint32_t htonl(uint32_t hostlong);
uint16_t htons(uint16_t hostshort);
uint32_t ntohl(uint32_t netlong);
uint16_t ntohs(uint16_t netshort);
functions and they worked well, but none of them handles double (float) conversion, because these types differ between architectures. Through XDR I found the double-precision format representation (http://en.wikipedia.org/wiki/Double_precision), but nothing about byte ordering there.
So, I would much appreciate if somebody helps me out on this (C code would be great!).
NOTE: OS is Linux kernel (2.6.29), ARMv7 CPU architecture.
You could look at the IEEE 754 interchange formats for floating point.
But the key is to define a network order yourself, e.g. byte 1 carries the sign and exponent, bytes 2 to n the mantissa in MSB-first order.
Then you can declare your functions
uint64_t htond(double hostdouble);
double ntohd(uint64_t netdouble);
The implementation then only depends on your compiler/platform.
Ideally you would pick a natural definition, so that on the ARM platform simple transformations suffice.
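For illustration, a minimal sketch of those two functions, assuming both ends use IEEE 754 binary64 and differ only in byte order, so the value travels as a big-endian uint64_t (the same convention as htonl: the returned integer, stored in memory, is in network order):
#include <stdint.h>
#include <string.h>

uint64_t htond(double hostdouble)
{
    uint64_t bits;
    unsigned char net[8];
    memcpy(&bits, &hostdouble, sizeof bits);           /* safe type-punning */
    for (int i = 0; i < 8; i++)                        /* most significant byte first */
        net[i] = (unsigned char)(bits >> (56 - 8 * i));
    memcpy(&bits, net, sizeof bits);                   /* reinterpret the wire bytes */
    return bits;
}

double ntohd(uint64_t netdouble)
{
    unsigned char net[8];
    uint64_t bits = 0;
    double d;
    memcpy(net, &netdouble, sizeof net);               /* raw wire bytes */
    for (int i = 0; i < 8; i++)                        /* rebuild MSB-first */
        bits = (bits << 8) | net[i];
    memcpy(&d, &bits, sizeof d);
    return d;
}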
EDIT:
From the comment
static void htond (double &x)
{
    int *Double_Overlay;
    int Holding_Buffer;
    Double_Overlay = (int *) &x;
    Holding_Buffer = Double_Overlay[0];
    Double_Overlay[0] = htonl(Double_Overlay[1]);
    Double_Overlay[1] = htonl(Holding_Buffer);
}
This could work, but obviously only if both platforms use the same encoding for double and if int has the same size as long.
By the way, returning the value in place through the reference is a bit odd.
But you could write a more stable version, like this (pseudo code)
void htond (const double hostDouble, uint8_t result[8])
{
    result[0] = signOf(hostDouble);
    result[1] = exponentOf(hostDouble);
    result[2..7] = mantissaOf(hostDouble);
}
This might be hacky (the char* hack), but it works for me:
double Buffer::get8AsDouble() {
    double little_endian = *(double*)this->cursor;
    double big_endian;
    int x = 0;
    char *little_pointer = (char*)&little_endian;
    char *big_pointer = (char*)&big_endian;
    while (x < 8) {
        big_pointer[x] = little_pointer[7 - x];
        ++x;
    }
    return big_endian;
}
For brevity, I've not included the range guards. You should include range guards, though, when working at this level.

GCC pointer cast warning

I am wondering why GCC is giving me this warning:
test.h: In function TestRegister:
test.h:12577: warning: cast to pointer from integer of different size
Code:
#define Address 0x1234

int TestRegister(unsigned int BaseAddress)
{
    unsigned int RegisterValue = 0;
    RegisterValue = *((unsigned int *)(BaseAddress + Address));
    if ((RegisterValue & 0xffffffff) != (0x0 << 0))
    {
        return(0);
    }
    else
    {
        return(1);
    }
}
Probably because you're on a 64-bit platform, where pointers are 64-bit but ints are 32-bit.
Rule-of-thumb: Don't try to use integers to store addresses.
If you include <stdint.h> and if you compile for the C99 standard using gcc -Wall -std=c99 you could cast to and from intptr_t which is an integer type of the same size as pointers.
RegisterValue = *((unsigned int *)((intptr_t)(BaseAddress + Address))) ;
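For what it's worth, here is a sketch of the whole routine with the base address carried as uintptr_t and the register read through a volatile pointer (usually what you want for memory-mapped registers); identifiers mirror the question:
#include <stdint.h>

#define Address 0x1234

int TestRegister(uintptr_t BaseAddress)
{
    /* uintptr_t round-trips a pointer without truncation; volatile forces
       a real read of the hardware register */
    volatile unsigned int *reg = (volatile unsigned int *)(BaseAddress + Address);
    return (*reg & 0xffffffff) ? 0 : 1;
}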
Among other things, you're assuming that a pointer will fit into an unsigned int, where C gives no such guarantee… there are a number of platforms in use today where this is untrue, apparently including yours.
A pointer to data can be stored in a (void*) or (type*) safely. Pointers can be added to (or subtracted to yield) a size_t or ssize_t. There's no guaranteed relationship between sizeof(int), sizeof(size_t), sizeof(ssize_t), and (void*) or (type*)…
(Also, in this case, there's no real point in initializing the var and overwriting it on the next line…)
Also unrelated, but you realise that != (0x0 << 0) is just != 0 and can be omitted, since if (x) is equivalent to if (x != 0) … ? Perhaps that's because this is cut down from a larger sample, but that entire routine could be presented as
int TestRegister (unsigned int* BaseAddress)
{ return ( (0xffffffff & *(BaseAddress + Address)) ? 0 : 1 ); }
(Edited: changed to unsigned int* as it seems far more likely he wants to skip through at int-sized offsets?)
