Bad output from Taylor series of sin(x) - math

I'm trying to write a program that reads a value x from the user and prints sin(x) computed from its Taylor series, but my output is bad. The output I get is not even a number: it is -1.#IND00 regardless of what I input.
Here's my code:
#include <stdio.h>
#include <conio.h>

void main()
{
    int i;
    double x, sum, last;
    sum = (double)0;
    scanf("%f", &x);
    last = x;
    sum = last;
    for (i = 1; i < 10; i++)
    {
        last *= (double)(-x * x) / ((2 * i) * (2 * i + 1));
        sum += last;
    }
    printf("%f", sum);
    getch();
}

I can see one problem:
scanf("%f",&x);
x is a double, so you need the l length modifier, i.e. "%lf".
[A true but irrelevant point about how this isn't the right formula for sinh, even though sinh is nowhere mentioned in the question, has been redacted.]
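For reference, here is a minimal corrected sketch of the program (my own rewrite, not the asker's exact code): only the format specifier is changed, the non-standard conio.h/getch() pause is dropped, and void main is replaced by the standard int main.
#include <stdio.h>

int main(void)
{
    int i;
    double x, sum, last;

    if (scanf("%lf", &x) != 1)      /* %lf matches a double */
        return 1;

    last = x;                       /* first Taylor term: x */
    sum = last;
    for (i = 1; i < 10; i++)
    {
        /* next term: multiply by -x^2 / ((2i)(2i+1)) */
        last *= -x * x / ((2 * i) * (2 * i + 1));
        sum += last;
    }
    printf("%f\n", sum);
    return 0;
}
With the wrong specifier, scanf writes only a float's worth of bytes into x, so the rest of the double typically holds garbage; the arithmetic then runs on that garbage, which is how the NaN output -1.#IND00 appears.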

Related

Multiple pointers in different objects pointing to same variable

I'm getting segmentation errors when I define pointers in different objects which all point to the same variable. I also tried implementing it with shared pointers, but so far that hasn't worked out. For example:
double var; //global var

int main(){
    double *point_to_var = &var;
    typeA A(point_to_var);
    typeB B(point_to_var);
    typeC C(point_to_var);
    var = 10.;
    B.sum(10.);
    C.sum(10.);
}

struct typeA{
    double *ptv;
    A(double *ptvv): ptv(ptvv){}
}

struct typeB{
    double *ptv;
    B(double *ptvv): ptv(ptvv){}
    double sum(double x);
}

struct typeC{
    double *ptv;
    C(double *ptvv): ptv(ptvv){}
    double sum(double x);
}

double typeB::sum(double x){
    return x + *ptv;
}

double typeC::sum(double x){
    return x + *ptv;
}
I would have expected C.sum(10.) to return a value of 20 in this case, since ptv holds the address of var, which equals 10; however, it crashes with a segmentation error. My code is more complicated than what I've shown here, but the idea is the same. It crashes when I try to use *ptv inside functions defined within the objects. The code compiles on the command line, but in Xcode, instead of a segmentation error I get EXC_BAD_ACCESS.
Using shared pointers (at least the way I did it) didn't seem to fix the problem. Is it possible to fix this without just using a global variable inside the objects?
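No answer is recorded here, but for what it's worth, a minimal compiling sketch of the same idea (my own reconstruction, not the asker's real code) works as expected once the structs are defined before main, each constructor is named after its type, and each struct definition ends with a semicolon. The key assumption is that var outlives every object that holds the pointer:
#include <iostream>

double var; // global, so it outlives every object that points to it

struct typeB {
    double *ptv;
    typeB(double *ptvv) : ptv(ptvv) {}
    double sum(double x) { return x + *ptv; } // dereferences the shared pointer
};

int main() {
    typeB B(&var);
    var = 10.;
    std::cout << B.sum(10.) << '\n'; // prints 20
    return 0;
}
If the full program still crashes with EXC_BAD_ACCESS, the usual suspect is that some object ends up dereferencing a pointer to a variable whose lifetime has already ended, which the simplified snippet cannot show.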

Running an executable in R that takes user input

I wrote some simple code in C to calculate the square of the number that a user inputs, which can be seen here:
#include <stdio.h>
#include <stdlib.h>   /* needed for system() */

int squared(int *x);

int main()
{
    int num = 0;
    printf("Enter an integer: \n");
    scanf("%d", &num);
    squared(&num);
    printf("Your number squared is: %d\n", num);
    system("pause");
    return 0;
}

int squared(int *x)
{
    return *x *= (*x);
}
I would like to call this from R. So I put the executable in my PATH and used system("Practice.exe") in RStudio, but this skipped the user input. I do not want to simply call a C function directly, as the goal is to run an executable built from complicated C code so that it can be wrapped in a tcltk GUI in R. How can I get user input from an executable in R?
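No answer is recorded here either, but one common workaround (an assumption on my part, not something stated in the question) is to let the executable take its input as a command-line argument instead of prompting, so that R can pass the value when it launches the process:
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <integer>\n", argv[0]);
        return 1;
    }
    int num = atoi(argv[1]);                        /* value supplied by the caller */
    printf("Your number squared is: %d\n", num * num);
    return 0;
}
From R this could then be invoked with something like system("Practice.exe 5"). Base R's system() also has an input argument for feeding text to the program's standard input, which may be an alternative if the interactive prompt has to stay.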

Assigned a value that is never used (simple program)

#include <stdio.h>
#include <conio.h>

int main ()
{
    int numc;
    puts ("NUMBER PLEASE");
    numc = getchar();
    printf ("%d");
    getch ();
    return 0;
}
I get the warning "numc is assigned a value that is never used", even though I'm trying to read the value. Please help.
Did you mean:
printf ("%d", numc);
?
That would use the value the compiler is warning you about.
You never use the value of numc after assigning it; that's why the compiler gives the warning.
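Putting the suggested fix together, a corrected sketch of the program (keeping the non-standard conio.h pause from the original):
#include <stdio.h>
#include <conio.h>

int main(void)
{
    int numc;
    puts("NUMBER PLEASE");
    numc = getchar();       /* note: this stores the character code, e.g. '5' gives 53 */
    printf("%d\n", numc);   /* the value is now actually used, so the warning goes away */
    getch();
    return 0;
}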

Accuracy: C++11's binomial_distribution<int> does not coincide with what R returns

I need to generate samples in C++ that follow the hypergeometric distribution. But, for my case I can approximate it with the binomial distribution without any problem.
Thus I'd like to use the std implementation in C++11. If I generate many samples and calculate the probabilities, I get values different from the ones R tells me. What is more, the difference does not get any smaller when the number of samples increases. The parameters are the same for R and C++.
Thus the question: Why do I not get the same results and what can I do/which should I trust?
See below for the R and C++ code. The C++ program calculates the difference to the R values. Even if I let the program run for quite a while, these numbers don't get smaller but just wiggle around the 1e-5, 1e-6, 1e-7 magnitude.
R:
dbinom(0:2, 2, 0.48645948945615974379)
#0.26372385596962805154 0.49963330914842424280 0.23664283488194759464
C++:
#include <iostream>
#include <iomanip>
#include <random>

using namespace std;

class Generator {
public:
    Generator();
    virtual ~Generator();
    int binom();
private:
    std::random_device randev;
    std::mt19937_64 gen;
    std::binomial_distribution<int> dist;
};

Generator::Generator() : randev(), gen(randev()), dist(2, 0.48645948945615974379) { }
Generator::~Generator() {}
int Generator::binom() { return dist(gen); }

int main() {
    Generator rd;
    const double nrolls = 10000000; // number of experiments
    double p[3] = {};
    for (int k = 1; k < 100; ++k) {
        for (int i = 0; i < nrolls; ++i) {
            int number = rd.binom();
            ++p[number];
        }
        cout << "Samples=" << setw(8) << nrolls*k <<
            " dP(0)=" << setw(13) << p[0]/(nrolls*k) - 0.26372385596962805154 <<
            " dP(1)=" << setw(13) << p[1]/(nrolls*k) - 0.49963330914842424280 <<
            " dP(2)=" << setw(13) << p[2]/(nrolls*k) - 0.23664283488194759464 << endl;
    }
    cout << "end";
    return 0;
}
A selective output:
Samples= 1e+07 dP(0)= -2.0056e-05 dP(1)= 9.49909e-05 dP(2)= -7.49349e-05
Samples= 1e+08 dP(0)= 1.5064e-05 dP(1)= 3.43609e-05 dP(2)= -4.94249e-05
Samples= 9.9e+08 dP(0)= -2.06449e-05 dP(1)= 5.93429e-06 dP(2)= 1.47106e-05
This should really be a comment.
I don't see anything wrong with your numbers. You are doing 10**9 repetitions, so by the central limit theorem you should see accuracy around 10**(-4.5). That is indeed what you are seeing. That the signs of dP(0) and dP(2) fluctuate is another good sign. If you run your program multiple times, do the signs on the last line always show the same pattern? If not, that is yet another good sign.
Btw R is giving you way too many digits in my opinion. With doubles you only have about 15 digits of accuracy.
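To make the central-limit argument concrete: the statistical error of an estimated probability p after N samples is roughly sqrt(p*(1-p)/N). A small sketch (mine, not part of the original answer) for the probabilities and sample counts in the question:
#include <cmath>
#include <cstdio>

int main() {
    const double p[3] = {0.26372385596962805154,
                         0.49963330914842424280,
                         0.23664283488194759464};
    const double N = 1e9;   // roughly the largest cumulative sample count shown
    for (int k = 0; k < 3; ++k)
        std::printf("expected std. error of P(%d): %.1e\n",
                    k, std::sqrt(p[k] * (1.0 - p[k]) / N));
    return 0;
}
For these values it comes out around 1.3e-5 to 1.6e-5, which matches the scale of the dP columns in the output above.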

How can I store a function pointer in a vector?

Something like: vector<void *(*func)(void *)>...
You can declare a vector of pointers to functions taking a single void * argument and returning void * like this:
#include <vector>
std::vector<void *(*)(void *)> v;
If you want to store pointers to functions with varying prototypes, it becomes more difficult and dangerous. Then you must cast the functions to a common type when adding them to the vector and cast them back to the original prototype when calling. Just an example of how ugly this gets:
#include <vector>

int mult(int a) { return 2*a; }

int main()
{
    int b;
    std::vector<void *(*)(void *)> v;
    v.push_back((void *(*)(void *))mult);
    b = ((int (*)(int)) v[0])(2); // The value of b is 4.
    return 0;
}
You can use typedefs to partially hide the function-casting syntax, but there is still the danger of calling a function through the wrong type, leading to crashes or other undefined behaviour. So don't do this.
// shorter
std::vector<int (*)(int)> v;
v.push_back(mult);
b = v[0](2); // The value of b is 4.
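For illustration, here is a self-contained sketch (mine, not from the answer above) of the typedef variant mentioned earlier; it only hides the casts behind names and is just as unsafe:
#include <vector>

int mult(int a) { return 2 * a; }

typedef void *(*generic_fn)(void *);   // the "erased" type stored in the vector
typedef int (*int_fn)(int);            // the real prototype of mult

int main()
{
    std::vector<generic_fn> v;
    v.push_back((generic_fn)mult);     // cast hidden behind a typedef name
    int b = ((int_fn)v[0])(2);         // b is 4, but nothing checks the type
    (void)b;
    return 0;
}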
Storing a function in a vector can be a difficult task, as illustrated above. If you only need one callback, you can instead store the function in a single function pointer, which is much easier. The main advantage is that you can store any kind of function, whether it is a plain function or a parametrized one (taking some input as parameters). The complete process is described with examples in the link below:
how can we store Function in pointer
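For completeness, a minimal sketch of what that answer describes (the function name twice is made up for illustration): storing a function in a single function pointer variable and calling through it.
#include <cstdio>

int twice(int a) { return 2 * a; }

int main() {
    int (*fp)(int) = twice;        // store the function in a pointer variable
    std::printf("%d\n", fp(21));   // prints 42
    return 0;
}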
