I was trying to display a number: 2893604342.00. But when I display it, it is shown as: -2893604342.
Following is the code snippet ...
avg += int(totalData[i][col.dataField]);
I have even replaced it with Number, but it's still showing the same negative number.
Please let me know whether there is any problem with int or Number!
The maximum values are accessible through each numeric type's static properties:
Number.MAX_VALUE
uint.MAX_VALUE
int.MAX_VALUE
(Just trace 'em.)
Integers in Flash are 32 bits, so an unsigned int's max value is (2^32)-1, i.e. 0xffffffff or 4294967295. A signed int's max positive value is (2^(32-1))-1 or 2147483647 (one of the bits is used for the sign). The Number type is 64 bits.
In order to guarantee space for your result, type the variable as Number and cast the result to Number (or not at all).
var avg : Number = 0;
...
avg += totalData[i][col.dataField] as Number;
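The same 32-bit wrap-around is easy to reproduce outside Flash. Here is a minimal C sketch (illustrative only, assuming a typical two's-complement target) of what happens when a value above 2^31 - 1 is forced into a 32-bit signed integer:

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    uint64_t value = 2893604342ULL;   /* the figure from the question, well inside 64 bits */
    uint32_t u = (uint32_t)value;     /* below 2^32 - 1, so it survives unchanged */
    int32_t  s = (int32_t)u;          /* above 2^31 - 1, so the conversion wraps */

    printf("uint32: %" PRIu32 "\n", u);  /* 2893604342 */
    printf("int32:  %" PRId32 "\n", s);  /* -1401362954 (implementation-defined, typical wrap) */
    return 0;
}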
The largest exact integral value is 2^53. Remember, ActionScript is ECMA at heart; look up the ToInt32 operation for more info on that.
Try casting it to a uint instead of an int
1'hF or 1'h1?
I am guessing 1'hF is not correct and 4'hF is required, because 1 bit is not enough to hold a hexadecimal digit?
In 1'h1, the leading 1 (before 'h, 'b, or 'd) specifies the number of bits. 4'h denotes a four-bit value written in hexadecimal; the hexadecimal digit "f" converts to binary "1111", so you need 4'hf to represent it.
The LRM section 5.7.1 "Integer literal constants" says this about any based literal constant:
If the size of the unsigned number is larger than the size specified
for the literal constant, the unsigned number shall be truncated from
the left
Most tools also generate a warning if it has to truncate.
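Truncation from the left is effectively just masking off the high bits. As an illustration only (a C sketch of the same idea, not Verilog semantics):

#include <stdio.h>
#include <stdint.h>

/* Keep only the lowest 'width' bits and discard the rest from the left,
 * roughly what a tool does when a literal is wider than its declared size. */
static uint32_t truncate_to_width(uint32_t value, unsigned width)
{
    uint32_t mask = (width >= 32) ? 0xFFFFFFFFu : ((1u << width) - 1u);
    return value & mask;
}

int main(void)
{
    printf("0xF truncated to 1 bit  -> 0x%X\n", (unsigned)truncate_to_width(0xF, 1)); /* 0x1, like 1'hF */
    printf("0xF truncated to 4 bits -> 0x%X\n", (unsigned)truncate_to_width(0xF, 4)); /* 0xF, like 4'hF */
    return 0;
}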
I'm trying to do this very basic task in C, where I want to define a number of ints in a header file. I've done this like so:
#define MINUTE (60)
#define HOUR (60 * MINUTE)
#define DAY (24 * HOUR)
The problem is that while MINUTE and HOUR return the correct answer, DAY returns something weird.
Serial.println(MINUTE); // 60
Serial.println(HOUR); // 3600
Serial.println(DAY); // 20864
Can someone explain why this happens?
Assuming you have something like
int days = DAY;
or
unsigned days = DAY;
You seem to have 16-bit integers. The maximum representable positive value for (signed) two's complement integers with 16 bits is 32767; for unsigned it is 65535.
So, as 24 * 3600 == 86400, you invoke undefined behaviour for the signed int and wrap-around for the unsigned (the int will likely wrap, too, but that is not guaranteed).
This results in 86400 modulo 65536 (which is 2 to the power of 16), which happens to be 20864.
Solution: use the stdint.h types uint32_t or int32_t to get integers of a defined size.
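One possible fix, assuming an Arduino-style target where int is 16 bits: give the base constant an explicitly 32-bit type so the multiplications happen in 32-bit arithmetic (the uppercase names are the ones from the question):

#include <stdint.h>

#define MINUTE ((int32_t)60)
#define HOUR   (60 * MINUTE)   /* promoted to int32_t, so 3600 is computed safely */
#define DAY    (24 * HOUR)     /* 86400 fits comfortably in a 32-bit integer */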
Edit: Using function arguments follows basically the same principle as the initialisers above.
Update: As you claimed, when directly passing the integer constant 86400 to the function, it will have type long, because the compiler automatically chooses the smallest type which can hold the value. It is very likely that the println methods are overloaded for long arguments, so they will print the correct value.
However, for the expression the original types are relevant. All the values 24, 60, 60 have int type, so the result will also be int; the compiler will not use a larger type just because the result might overflow. Use 24L and you will get a long result for the macros, too.
It looks like you actually managed to dig up an ancient 16-bit compiler (where did you find it?). Otherwise I'd like to see the code that produces these numbers.
20864 = 86400 % 65536
Try storing the value in an int instead of a short.
I have a 16-bits sample between -32768 and 32767.
To save space I want to convert it to a 8-bits sample, so I divide the sample by 256, and add 128.
-32768 / 256 = -128, and -128 + 128 = 0
32767 / 256 = 127.99, and 127.99 + 128 = 255.99
Now, the 0 will fit perfectly in a byte, but the 255.99 has to be rounded down to 255, causing me to lose precision, because when converting back I'll get 32512 instead of 32767.
How can I do this without losing the original min/max values? I know I'm making a very obvious thinking error, but I can't figure out where the mistake lies.
And yes, of course I'm fully aware I lose precision by dividing, and will not be able to deduce the original values from the 8-bit samples, but I just wonder why I don't get the original maximum.
The answers for down-sampling have already been provided.
This answer relates to up-sampling using the full range. Here is a C99 snippet demonstrating how you can spread the error across the full range of your values:
#include <stdio.h>
int main(void)
{
    for( int i = 0; i < 256; i++ ) {
        unsigned short scaledVal = ((unsigned short)i << 8) + (unsigned short)i;
        printf( "%8d%8hu\n", i, scaledVal );
    }
    return 0;
}
It's quite simple. You shift the value left by 8 and then add the original value back. That means every increase by 1 in the [0,255] range corresponds to an increase by 257 in the [0,65535] range.
I would like to point out that this might give worse results than you began with. For example, if you downsampled 65280 (0xff00) you would get 255, but then upsampling that would give 65535 (0xffff), which is a total error of 255. You will have similarly large errors across most of the higher end of your data range.
You might do better to abandon the notion of going back to the [0,65535] range, and instead round your values by half. That is, shift left and add 127. This means the error is uniform instead of skewed. Because you don't actually know what the original value was, the best you can do is estimate it with a value right in the centre.
To summarize, I think this is more mathematically correct:
unsigned short scaledVal = ((unsigned short)i << 8) + 127;
You don't get the original maximum because you can't represent the number 256 as an 8-bit unsigned integer.
If you're trying to compress your 16-bit integer value into an 8-bit integer range, you keep the most significant 8 bits and throw out the least significant 8 bits. Normally this is accomplished by shifting the bits: the >> operator shifts from most to least significant, so shifting by 8 (>> 8) does the job. You can also mask out the high byte and divide off the zeros, doing your rounding before the division, with something like 8BitInt = (16BitInt & 65280)/256; [65280 a.k.a. 0xFF00].
Every bit you shift off of a value halves it, like division by 2, and rounds down.
All of the above is complicated some by the fact that you're dealing with a signed integer.
Finally I'm not 100% certain I got everything right here because really, I haven't tried doing this.
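To make the round trip concrete, here is a small C sketch along the lines described above (the helper names are made up; it uses the question's divide-by-256-and-add-128 scheme on the way down and multiplies on the way back up):

#include <stdio.h>
#include <stdint.h>

/* 16-bit signed sample -> 8-bit unsigned, as in the question. */
static uint8_t downsample(int16_t s)
{
    return (uint8_t)(s / 256 + 128);      /* truncating division, then bias by 128 */
}

/* 8-bit unsigned -> 16-bit signed; the discarded low byte cannot be recovered. */
static int16_t upsample(uint8_t b)
{
    return (int16_t)(((int)b - 128) * 256);
}

int main(void)
{
    int16_t samples[] = { -32768, -1, 0, 1, 32767 };
    for (size_t i = 0; i < sizeof samples / sizeof samples[0]; i++) {
        uint8_t small = downsample(samples[i]);
        printf("%6d -> %3u -> %6d\n", samples[i], (unsigned)small, upsample(small));
    }
    return 0;
}

Running it shows 32767 coming back as 32512, exactly the loss the question describes.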
My problem is that my program takes a very long time to execute. The program is supposed to check the user's password; the approach used is:
take the password from the console into an array, and
compare it with the previously saved password.
The comparison is done by the function str_cmp(), which returns zero if the strings are equal and non-zero if they are not.
#include<stdio.h>
char str_cmp(char *,char *);
int main(void)
{
    int i=0;
    char c,cmp[10],org[10]="0123456789";
    printf("\nEnter your account password\ntype 0123456789\n");
    for(i=0;(c=getchar())!=EOF;i++)
        cmp[i]=c;
    if(!str_cmp(org,cmp))
    {
        printf("\nLogin Sucessful");
    }
    else
        printf("\nIncorrect Password");
    return 0;
}
char str_cmp(char *porg,char *pcmp)
{
    int i=0,l=0;
    for(i=0;*porg+i;i++)
    {
        if(!(*porg+i==*pcmp+i))
        {
            l++;
        }
    }
    return l;
}
There are libraries available to do this much more simply, but I will assume that this is an assignment and either way it is a good learning experience. I think the problem is in your for loop in the str_cmp function. The condition you are using is "*porg+i". This is not really doing a comparison; the loop simply runs until the expression is equal to 0. That will only happen once i is so large that *porg+i exceeds what an "int" can store and it wraps back around to 0 (this is called overflowing the variable).
Instead, you should pass a size into the str_cmp function corresponding to the length of the strings. In the for loop condition you should make sure that i < str_size.
However, there is a built-in strncmp function (http://www.elook.org/programming/c/strncmp.html) that does this exact thing.
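For example, assuming a fixed 10-character password, strncmp could replace the hand-written loop like this:

#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *saved = "0123456789";
    const char *typed = "0123456789";

    /* Compare at most 10 characters; strncmp returns 0 when they match. */
    if (strncmp(saved, typed, 10) == 0)
        printf("Login Successful\n");
    else
        printf("Incorrect Password\n");
    return 0;
}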
You also have a different problem. You are doing pointer addition like so:
*porg+i
This is going to take the value of the first element of the array and add i to it. Instead you want to do:
*(porg+i)
That will add to the pointer and then dereference it to get the value.
To clarify more fully with the comparison, because this is a very important concept for pointers: porg is defined as a char*. This means that you have a variable that holds the memory address of a char. When you use the dereference operator (*, for example *porg) on the variable, it returns the value stored at that piece of memory. However, you can add a number to the memory location to move to a different memory location. porg + 1 is going to return the memory location after porg. Therefore, when you do *porg + 1 you are getting the value at the memory address and adding 1 to it. On the other hand, when you do *(porg + 1) you are getting the value at the memory address one after where porg is pointing to. This is useful for arrays because arrays store their values one after another. However, a more understandable notation for doing this is: porg[1]. This says "get the value 1 after the beginning of the array", or in other words "get the second element of the array".
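A tiny sketch of that difference (the string and values are illustrative only):

#include <stdio.h>

int main(void)
{
    char s[] = "AB";
    printf("%d\n", *s + 1);    /* takes the value of 'A' and adds 1 to it (66 on an ASCII system) */
    printf("%c\n", *(s + 1));  /* 'B': advances the pointer, then dereferences */
    printf("%c\n", s[1]);      /* 'B': array indexing, equivalent to *(s + 1) */
    return 0;
}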
All conditions in C are checking if the value is zero or non-zero. Zero means false, and every other value means true. When you use this expression (*porg + 1) for a condition it is going to do the calculation (value at porg + 1) and check if it is zero or not.
This leads me to the other very important concept for programming in C. An int can only hold values up to a certain size. If the variable is incremented beyond that maximum value, it will cycle around to 0. So let's say the maximum value of an int is 256 (it is in fact much larger). If you have an int that has the value 256 and add 1 to it, it will become zero instead of 257. In reality the maximum is 2,147,483,647 for most compilers (where int is 32 bits), which is why your program takes so long: it keeps looping until *porg + i overflows and comes back around to zero.
Try including string.h:
#include <string.h>
Then use the built-in strcmp() function. The existing string functions have already been written to be as fast as possible in most situations.
Also, I think your for statement is messed up:
for(i=0;*porg+i;i++)
That's going to dereference the pointer, then add i to it. I'm surprised the for loop ever exits.
If you change it to this, it should work:
for(i=0;porg[i];i++)
Your original string is also one longer than you think it is. You allocate 10 bytes, but it's actually 11 bytes long. A string (in quotes) is always ended with a null character. You need to declare 11 bytes for your char array.
Another issue:
if(!(*porg+i==*pcmp+i))
should be changed to
if(!(porg[i]==pcmp[i]))
For the same reasons listed above.
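Putting the suggestions from both answers together, a corrected version might look like this sketch (reading the input with fgets and trimming the newline with strcspn is an assumption on my part, not part of the original code):

#include <stdio.h>
#include <string.h>

int main(void)
{
    char cmp[11];                    /* 10 characters plus the terminating '\0' */
    char org[11] = "0123456789";     /* string literals carry a hidden null byte */

    printf("\nEnter your account password\ntype 0123456789\n");
    if (fgets(cmp, sizeof cmp, stdin) != NULL) {
        cmp[strcspn(cmp, "\n")] = '\0';   /* strip the newline fgets may keep */
        if (strcmp(org, cmp) == 0)
            printf("\nLogin Successful\n");
        else
            printf("\nIncorrect Password\n");
    }
    return 0;
}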
What are the size guarantees on int in Single UNIX or POSIX? This sure is a FAQ, but I can't find the answer...
With icecrime's answer, and a bit further searching on my side, I got a complete picture:
ANSI C and C99 both mandate that INT_MAX be at least +32767 (i.e. 2^15-1). POSIX doesn't go beyond that. Single Unix v1 has the same guarantee, while Single Unix v2 states that the minimum acceptable value is 2 147 483 647 (i.e. 2^31-1).
The C99 standard specifies the content of the <limits.h> header in the following way:
Their implementation-defined values shall be equal or greater in magnitude (absolute value) to those shown, with the same sign.
minimum value for an object of type int
    INT_MIN    -32767    // -(2^15 - 1)
maximum value for an object of type int
    INT_MAX    +32767    // 2^15 - 1
maximum value for an object of type unsigned int
    UINT_MAX    65535    // 2^16 - 1
There are no size requirements expressed on the int type.
However, the <stdint.h> header offers the additional exact-width integer types int8_t, int16_t, int32_t, int64_t and their unsigned counterparts:
The typedef name intN_t designates a signed integer type with width N, no padding bits, and a two's complement representation. Thus, int8_t denotes a signed integer type with a width of exactly 8 bits.
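If you want to see what a given implementation actually provides, a quick sketch:

#include <stdio.h>
#include <limits.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    /* What <limits.h> reports for this implementation (int is at least 16 bits). */
    printf("INT_MIN   = %d\n", INT_MIN);
    printf("INT_MAX   = %d\n", INT_MAX);
    printf("UINT_MAX  = %u\n", UINT_MAX);

    /* Exact-width types from <stdint.h> when a guaranteed size is required. */
    printf("INT32_MAX = %" PRId32 "\n", INT32_MAX);
    return 0;
}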
POSIX doesn't cover that. The ISO C standard guarantees that types will be able to handle at least a certain range of values but not that they'll be of a particular size.
The <stdint.h> header introduced with C99 will get you access to types like int16_t that do.