I have a string like "2.1648797E -05" and I need to convert it to "0.00021648797".
Is there any solution to do this conversion?
Try to use double or long long:
cout << setiosflags(ios::fixed) << thefloat << endl;
An important characteristic of floating-point values is that, for large values, they do not carry precision for all the significant figures down to the decimal point. The "scientific" display reasonably reflects the inherent internal storage realities.
In C++ you can use std::stringstream: read the string into a double, then print it using format specifiers to set the precision of the output, e.g. to 12 digits. Take a look at this question for how to print a decimal number with fixed precision.
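A minimal sketch of that idea, assuming the space before the exponent has already been removed and using 12 digits of precision purely as an example:

#include <iostream>
#include <iomanip>
#include <sstream>

int main() {
    std::istringstream in("2.1648797E-05");   // note: no space before the exponent
    double value;
    if (in >> value)
        std::cout << std::fixed << std::setprecision(12) << value << "\n";   // 0.000021648797
    return 0;
}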
If you are really just going from string representation to string representation, and precision is very important or values may leave the valid range for doubles, then I would avoid converting to a double.
The value may get altered by the conversion due to precision errors or range problems.
Try writing a simple text parser. Roughly like this:
Read the digits, omitting the decimal point up to the 'E' but store the decimal point position.
After the 'E' read the exponent as a number and add that to your stored decimal position.
Then output the digits again, appending zeros at the beginning or end as needed and inserting the decimal point. A rough sketch of this approach is shown below.
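This is one way that sketch could look. It assumes well-formed input (digits, an optional decimal point, an 'E' possibly followed by spaces and a signed exponent) and ignores the sign of the mantissa for brevity; the function name sci_to_fixed is just illustrative.

#include <cctype>
#include <cstdlib>
#include <iostream>
#include <string>

std::string sci_to_fixed(const std::string &input) {
    std::string digits;
    int point = 0;                 // number of digits before the decimal point
    bool seen_point = false;
    std::size_t i = 0;
    // Collect the digits up to the 'E', remembering where the decimal point was.
    for (; i < input.size() && input[i] != 'E' && input[i] != 'e'; ++i) {
        if (std::isdigit(static_cast<unsigned char>(input[i]))) {
            digits += input[i];
            if (!seen_point) ++point;
        } else if (input[i] == '.') {
            seen_point = true;
        }
    }
    // Everything after the 'E' (leading spaces included) is the exponent.
    int exponent = (i < input.size()) ? std::atoi(input.c_str() + i + 1) : 0;
    point += exponent;
    // Pad with zeros so the decimal point falls inside the digit string.
    while (point <= 0) { digits.insert(0, "0"); ++point; }
    while (point >= static_cast<int>(digits.size())) digits += '0';
    return digits.substr(0, point) + "." + digits.substr(point);
}

int main() {
    std::cout << sci_to_fixed("2.1648797E -05") << "\n";  // prints 0.000021648797
    return 0;
}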
There are some unclear issues here:
1. Was the space in "2.1648797E -05" intended? Let's assume it is OK.
2. 2.1648797E-05 is 10 times smaller than 0.00021648797. Assume OP meant "0.000021648797" (another zero).
3. Windows is not tagged, but OP posted a Windows answer.
The major challenge here, and I think the OP's core question, is that std::precision() has different meanings in fixed versus default notation, and the OP wants the default meaning while using fixed.
The precision field differs between fixed and default floating-point notation. In default notation, the precision field specifies the maximum number of useful digits to display both before and after the decimal point, possibly using scientific notation, while in fixed notation it specifies exactly how many digits to display after the decimal point.
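A quick illustration of that difference (the printed values in the comments assume typical IEEE-754 double output):

#include <iostream>
#include <iomanip>

int main() {
    double x = 2.1648797e-05;
    std::cout << std::setprecision(9) << x << "\n";                 // default: 2.1648797e-05 (9 significant digits)
    std::cout << std::fixed << std::setprecision(9) << x << "\n";   // fixed:   0.000021649   (9 digits after the point)
    return 0;
}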
There are two approaches to solve this. The first is to convert the input string to a number and then output the number in the new fixed format; that is presented below. The second is to parse the input string and form the new format directly; that is not done here.
#include <iostream>
#include <iomanip>
#include <string>
#include <sstream>
#include <algorithm>
#include <cmath>
#include <cfloat>

double ConvertStringWithSpaceToDouble(std::string s) {
    // Get rid of pesky space in "2.1648797E -05"
    s.erase(std::remove(s.begin(), s.end(), ' '), s.end());
    std::istringstream i(s);
    double x;
    if (!(i >> x)) {
        x = 0; // handle error;
    }
    std::cout << x << std::endl;
    return x;
}

std::string ConvertDoubleToString(double x) {
    std::ostringstream s;
    double fraction = fabs(modf(x, &x));
    s.precision(0);
    s.setf(std::ios::fixed);
    // stream whole number part
    s << x << '.';
    // Threshold becomes non-zero once a non-zero digit is found.
    // Its level increases with each additional digit streamed to prevent excess trailing zeros.
    double threshold = 0.0;
    while (fraction > threshold) {
        double digit;
        fraction = modf(fraction * 10, &digit);
        s << digit;
        if (threshold) {
            threshold *= 10.0;
        }
        else if (digit > 0) {
            // Use DBL_DIG to define the number of interesting digits
            threshold = pow(10, -DBL_DIG);
        }
    }
    return s.str();
}

int main(int argc, char* argv[]) {
    std::string s("2.1648797E -05");
    double x = ConvertStringWithSpaceToDouble(s);
    s = ConvertDoubleToString(x);
    std::cout << s << std::endl;
    return 0;
}
Thanks guys, I fixed it using:
Decimal dec = Decimal.Parse(str, System.Globalization.NumberStyles.Any);
I'm learning C++, and encountering these problems in a simple program, so please help me out.
This is the code
#include<iostream>
using std::cout;
int main()
{
    float pie;
    pie = (22/7);
    cout << "The Value of Pi(22/7) is " << pie << "\n";
    return 0;
}
and the output is
The Value of Pi(22/7) is 3
Why is the value of Pi not in decimal?
That's because you're doing integer division.
What you want is really float division:
#include<iostream>
using std::cout;
int main()
{
    float pie;
    pie = float(22)/7; // 22/(float(7)) is also equivalent
    cout << "The Value of Pi(22/7) is " << pie << "\n";
    return 0;
}
However, this type conversion: float(variable) or float(value) isn't type safe.
You could have gotten the value you wanted by ensuring that the values you were computing were floating point to begin with as follows:
22.0/7
OR
22/7.0
OR
22.0/7.0
But that's generally a hassle and will require you to keep track of all the types you're working with. Thus, the final and best method involves using static_cast:
static_cast<float>(22)/7
OR
22/static_cast<float>(7)
As for why you should use static_cast - see this:
Why use static_cast<int>(x) instead of (int)x?
pie = (22/7);
Here the division is integer division, because both operands are int.
What you intend to do is floating-point division:
pie = (22.0/7);
Here 22.0 is double, so the division becomes floating-point division (even though 7 is still int).
The rule is that if both operands are of integral type (such as int, long, char, etc.), then it is integer division; otherwise it is floating-point division (i.e., if even a single operand is float or double).
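A small demonstration of the rule (the output values shown in the comments assume the default stream precision of six significant digits):

#include <iostream>

int main() {
    std::cout << 22 / 7    << "\n";  // 3       : both operands int  -> integer division
    std::cout << 22.0 / 7  << "\n";  // 3.14286 : one operand double -> floating-point division
    std::cout << 22 / 7.0f << "\n";  // 3.14286 : one operand float  -> floating-point division
    return 0;
}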
Use:
pi = 22/7.0
If you give the / operator two integer operands, the division performed will be integer division and the result will not be a float.
I need to generate samples in C++ that follow the hypergeometric distribution. But, for my case I can approximate it with the binomial distribution without any problem.
Thus I'd like to use the std implementation in C++11. If I generate many samples and calculate the probabilities, I get values different from the ones R tells me. What is more, the difference does not get any smaller when the number of samples increases. The parameters are the same for R and C++.
Thus the question: Why do I not get the same results and what can I do/which should I trust?
See below for the R and C++ code. The C++ program calculates the difference from the R values. Even if I let the program run for quite a while, these numbers don't get smaller but just wiggle around the E-5, E-6, E-7 magnitude.
R:
dbinom(0:2, 2, 0.48645948945615974379)
#0.26372385596962805154 0.49963330914842424280 0.23664283488194759464
C++:
#include <iostream>
#include <iomanip>
#include <random>
using namespace std;
class Generator {
public:
    Generator();
    virtual ~Generator();
    int binom();
private:
    std::random_device randev;
    std::mt19937_64 gen;
    std::binomial_distribution<int> dist;
};

Generator::Generator() : randev(), gen(randev()), dist(2, 0.48645948945615974379) { }
Generator::~Generator() {}
int Generator::binom() { return dist(gen); }

int main() {
    Generator rd;
    const double nrolls = 10000000; // number of experiments
    double p[3] = {};
    for (int k = 1; k < 100; ++k) {
        for (int i = 0; i < nrolls; ++i) {
            int number = rd.binom();
            ++p[number];
        }
        cout << "Samples=" << setw(8) << nrolls*k <<
            " dP(0)=" << setw(13) << p[0]/(nrolls*k) - 0.26372385596962805154 <<
            " dP(1)=" << setw(13) << p[1]/(nrolls*k) - 0.49963330914842424280 <<
            " dP(2)=" << setw(13) << p[2]/(nrolls*k) - 0.23664283488194759464 << endl;
    }
    cout << "end";
    return 0;
}
A selective output:
Samples= 1e+07 dP(0)= -2.0056e-05 dP(1)= 9.49909e-05 dP(2)= -7.49349e-05
Samples= 1e+08 dP(0)= 1.5064e-05 dP(1)= 3.43609e-05 dP(2)= -4.94249e-05
Samples= 9.9e+08 dP(0)= -2.06449e-05 dP(1)= 5.93429e-06 dP(2)= 1.47106e-05
This should really be a comment.
I don't see anything wrong with your numbers. You are doing 10**9 repetitions. Hence by the central limit theorem you should see accuracy around 10**(-4.5). That is indeed what you are seeing. That the signs of dP(0) and dP(2) fluctuate is another good sign. If you run your program multiple times, do the signs on the last line always show the same pattern? If not, that is another good sign.
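A rough back-of-the-envelope check of that estimate, assuming the plain binomial standard error sqrt(p*(1-p)/N) for the observed frequencies, with N chosen to match roughly the largest sample count in the output above:

#include <cmath>
#include <cstdio>

int main() {
    const double p[3] = {0.26372385596962805154,
                         0.49963330914842424280,
                         0.23664283488194759464};
    const double N = 9.9e8;  // about the largest sample count shown above
    for (int k = 0; k < 3; ++k)
        std::printf("expected |dP(%d)| ~ %.2g\n", k, std::sqrt(p[k] * (1.0 - p[k]) / N));
    return 0;
}

The expected deviations come out on the order of 1e-5, which matches the wiggles in the posted output.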
Btw R is giving you way too many digits in my opinion. With doubles you only have about 15 digits of accuracy.
I have a microcontroller and I am sampling the values of an LM335 temperature sensor.
The LCD library that I have allows me to display the hexadecimal value sampled by the 10-bit ADC.
The 10-bit ADC gives me values from 0x0000 to 0x03FF.
What I am having trouble with is converting the hexadecimal value to a format that can be understood by regular humans.
Any leads would be greatly appreciated, since I am completely lost on the issue.
You could create a "string" into which you construct the decimal number like this (the constants depend on the actual range of the value, I presume 0-255, whether you want it to be null-terminated, etc.):
char result[4];
char i = 3;
do {
    result[i] = '0' + value % 10;
    value /= 10;
    i--;
} while (value > 0);
Basically, your problem is how to split a number into decimal digits so you can use your LCD library and send one digit to each cell.
If your LCD is based on 7-segment cells, then you need to output a value from 0 to 9 for each digit, not an ASCII code. The solution by #Roman Hocke is fine for this, provided that you don't add '0' to value % 10.
Another way to split a number into digits is to convert it into BCD. For that, there is an algorithm named "double dabble" which allows you to convert your number into BCD without using division or modulo operations, which can be nice if your microcontroller has no provision for division, or if division is slower than you need.
"Double dable" algorithm sounds perfect for microcontrollers without provision for the division operation. However, a quick oversight of such algorithm in the Wikipedia shows that it uses dynamic memory, which seems to be worst than a routine for division. Of course, there must be an implementation out there that are not using calls to malloc() and friends.
Just to point out that Roman Hocke's code snippet has a little mistake. This version works OK for decimals in the range 0-255; it can easily be expanded to any range:
void dec2str(uint8_t val, char * res)
{
    int8_t i = 2;
    do {
        res[i] = '0' + val % 10;
        val /= 10;
        i--;
    } while (val > 0);
    while (i >= 0)        // pad unused leading positions so they are not left uninitialized
        res[i--] = '0';
    res[3] = 0;
}
I was given an assignment to create a procedure that scans a float, called getfloat.
For some reason, I am getting strange values. If I enter "1" it prints 49. Why does this happen? Also, when I input values, I can't see them on the screen. When I use scanf, for example, I see what I type in the console window, but now the screen is just blank, and when I hit Enter it shows a bad output:
Example - input: -1. Output: 499.00000
Here is my code:
#include <stdio.h>
#include <conio.h>
#include <math.h>
#include <ctype.h>
void getfloat(float* num);
void main()
{
    float num = 0;
    printf("Enter the float\n");
    getfloat(&num);
    printf("\nThe number is %lf\n", num);
    getch();
}

void getfloat(float* num)
{
    float c, sign = 1, exponent = 10;
    c = getch();
    if ((!isdigit(c)) && (c != '+') && (c != '-')) //if it doesnt start with a number a + or a -, its not a valid input
    {
        printf("Not a number\n");
        return;
    }
    if (c == '-') //if it starts with a minus, make sign negative one, later multiply our number by sign
        sign = -1;
    for (*num = 0; isdigit(c); c = getch())
        *num = (*num * 10) + c; //scan the whole part of the number
    if (c != '.') //if after scanning whole part, c isnt a dot, we finished
        return;
    do //if it is a dot, scan fraction part
    {
        c = getch();
        if (isdigit(c))
        {
            *num += c / exponent;
            exponent *= 10;
        }
    } while (isdigit(c));
    *num *= sign;
}
There are a number of issues.
1) Your posted code does not match your example "input: -1. Output: 499.00000"; I get 0 due to the lack of a getch() after finding a '-'. See #7.
2) c is a character. When you enter '1', c takes on the code for the character '1', which in your case, being ASCII coding, is 49. To convert a digit from its ASCII value to a numeric value, subtract 48 (the ASCII code for the character '0'), often done as c - '0':
*num=(*num*10)+c;
*num+=c/exponent;
becomes
*num = (*num*10) + (c-'0');
*num += (c-'0')/exponent;
3) Although you declare c as a float, I recommend you declare it as an int. int is the return type of getch().
4) Function getch() is "used to get a character from console but does not echo to the screen". That is why you do not see what you type. Consider getchar() instead.
5) [Edit: deleted "Avoid =-". Thank-you @Daniel Fischer]
6) Your exponent calculation needs rework. Note: your exponent could receive a sign character.
7) When you test if(c=='-'), you do not then fetch another c. You also might want to test for else if(c=='+') and consume that c as well.
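Putting the points above together, a minimal corrected sketch (assumptions: getchar() instead of the non-standard conio getch(), and the same overall structure as the original):

#include <stdio.h>
#include <ctype.h>

void getfloat(float* num)
{
    float sign = 1, scale = 10;
    int c = getchar();                      /* int, so non-digit codes and EOF are handled cleanly */
    if (c == '-' || c == '+') {             /* consume an optional sign */
        if (c == '-')
            sign = -1;
        c = getchar();
    }
    if (!isdigit(c)) {
        printf("Not a number\n");
        *num = 0;
        return;
    }
    for (*num = 0; isdigit(c); c = getchar())
        *num = (*num * 10) + (c - '0');     /* whole part */
    if (c == '.') {
        for (c = getchar(); isdigit(c); c = getchar()) {
            *num += (c - '0') / scale;      /* fraction part */
            scale *= 10;
        }
    }
    *num *= sign;
}

int main(void)
{
    float num = 0;
    printf("Enter the float\n");
    getfloat(&num);
    printf("The number is %f\n", num);
    return 0;
}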
Good luck in your C journey.
49 is the ASCII code for the character '1'. So when ('0' <= c && c <= '9') you need to subtract '0' to get the numeric value itself.
A small hint: 49 is the ASCII code for the character 1. You are using getch(), which gives you back the character code as its return value.
Which of the following two approaches is more efficient on an ATmega328P?
unsigned int value;
unsigned char char_high, char_low;
char_high = value>>8;
value = value<<8;
char_low = value>>8;
OR
unsigned int value;
unsigned char char_high, char_low;
char_high = value>>8;
char_low = value & 0xff;
You really should measure. I won't answer your question (since you'd benefit more from measuring than I would), but I'll give you a third option:
struct {
    union {
        uint16_t big;
        uint8_t small[2];
    };
} nums;
(be aware of the difference between big endian and little endian here)
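A hedged usage sketch of that idea (using a named union and illustrative values; note that reading the union member that was not most recently written is well-defined in C and supported by GCC/avr-gcc, but technically undefined behaviour in standard C++):

#include <stdint.h>
#include <stdio.h>

union Split {
    uint16_t big;
    uint8_t  small[2];
};

int main(void) {
    union Split nums;
    nums.big = 0x1234;
    // On a little-endian target such as the ATmega328P, small[0] is the low
    // byte (0x34) and small[1] is the high byte (0x12); a big-endian target
    // would swap them.
    printf("low=0x%02X high=0x%02X\n", (unsigned)nums.small[0], (unsigned)nums.small[1]);
    return 0;
}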
One option would be to measure it (as has already been said).
Or, compile both and see what the assembly language output looks like.
But actually, the second piece of code you have won't work: if you take value << 8 and assign it to a char, all you get is zero in the char, and the subsequent >> 8 will still leave you with zero.
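To follow up on the suggestion of looking at the assembly, a sketch of how one might compare the two approaches (the file and function names are illustrative):

/* Compile with something like: avr-gcc -Os -mmcu=atmega328p -S split.c
   and compare the generated assembly of the two functions. */
#include <stdint.h>

void split_shift(uint16_t value, uint8_t *hi, uint8_t *lo) {
    *hi = value >> 8;
    value = value << 8;
    *lo = value >> 8;
}

void split_mask(uint16_t value, uint8_t *hi, uint8_t *lo) {
    *hi = value >> 8;
    *lo = value & 0xff;
}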