Generating standard normals using a double exponential - R

I'm trying to write code for generating standard normals using a double exponential distribution. In other words,
x is a double exponential random variable that I've already coded correctly here:
double.exponential.rv <- function(i)
{
  u <- runif(1)
  x <- -log(1 - u)
  v <- runif(1)
  if (v < .5) x <- -x
  return(x)
}
where i=1
y is drawn uniformly between 0 and sqrt(exp(1)/(2*pi))*exp(-abs(x)).
So far my code is this:
std.normal.rv <- function(i)
{
  while (1)
  {
    x <- double.exponential.rv(1)
    y <- runif(1) * (sqrt(exp(1) / (2 * pi))) * exp(-abs(x))
    if (y <= (sqrt(exp(1) / (2 * pi))) * exp(-abs(x))) return(x)
  }
  hist(x, nclass = 100, freq = FALSE)
}
I'm not quite sure what I'm doing wrong, but I receive the error "no function to return from, jumping to top level",
and it doesn't plot. Any suggestions to fix my code?
Thanks!
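A likely explanation for the error: "no function to return from, jumping to top level" appears when return() is evaluated at the top level, which typically happens if the function body is pasted into the console line by line instead of the whole definition being sent at once. Separately, the accept/reject test as written compares y with the same envelope it was drawn under, so every proposal is accepted and the output is still double exponential; the comparison should be against the standard normal density. The hist() call after return(x) is also unreachable, so it is easier to plot a sample outside the generator. A minimal sketch along those lines, keeping double.exponential.rv as defined above:
std.normal.rv <- function(i)
{
  while (TRUE)
  {
    x <- double.exponential.rv(1)
    # uniform draw between 0 and the envelope sqrt(e/(2*pi)) * exp(-|x|)
    y <- runif(1) * sqrt(exp(1) / (2 * pi)) * exp(-abs(x))
    # accept x only when y falls under the standard normal density
    if (y <= dnorm(x)) return(x)
  }
}
# generate a sample first, then plot it
z <- sapply(1:10000, std.normal.rv)
hist(z, nclass = 100, freq = FALSE)
curve(dnorm(x), add = TRUE)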


How to suppress annoying stream of warnings from pointcloudlibrary `SampleConsensusModelPlane::optimizeModelCoefficients`

I have this function for fitting a plane to a pointcloud using PCL's sac model fitting. I want the best result I can get, so I want to run with seg.setOptimizeCoefficients(true).
The problem is that a lot of the time the pointcloud passed in will not have enough points to optimise coefficients, so I get a continuous stream of:
[pcl::SampleConsensusModelPlane::optimizeModelCoefficients] Not enough inliers found to optimize model coefficients (0)! Returning the same coefficients.
I would like coefficient optimisation to run when it can, and when it can't, to just carry on without polluting the CLI output with many red warning messages.
According to this issue, this message just means that there are fewer than 3 inlier points for the SAC model fitting. I do extract the inlier points, so I could manually check whether there are 3 or more. But I can't see how to do this first and THEN find the optimized model coefficients. Is there a way?
inline void fit_plane_to_points(
    const pcl::PointCloud<pcl::PointXYZI>::ConstPtr& det_points,
    const pcl::ModelCoefficients::Ptr& coefficients,
    const Eigen::Vector3f& vec,
    const pcl::PointCloud<pcl::PointXYZI>::Ptr& inlier_pts) {
  // if no det points to work with, don't try and segment
  if (det_points->size() < 3) {
    return;
  }
  // fit surface point samples to a plane
  pcl::PointIndices::Ptr inlier_indices(new pcl::PointIndices);
  pcl::SACSegmentation<pcl::PointXYZI> seg;
  seg.setModelType(pcl::SACMODEL_PERPENDICULAR_PLANE);
  // max allowed difference between the plane normal and the given axis
  seg.setEpsAngle(sac_angle_threshold_);
  seg.setAxis(vec);
  seg.setMethodType(pcl::SAC_RANSAC);
  seg.setDistanceThreshold(sac_distance_threshold_);
  seg.setMaxIterations(1000);
  seg.setInputCloud(det_points);
  seg.setOptimizeCoefficients(sac_optimise_coefficients_);
  seg.segment(*inlier_indices, *coefficients);
  if (inlier_indices->indices.empty()) {
    // if no inlier points don't try and extract
    return;
  }
  // extract the planar points
  pcl::ExtractIndices<pcl::PointXYZI> extract;
  extract.setInputCloud(det_points);
  extract.setIndices(inlier_indices);
  extract.setNegative(false);
  extract.filter(*inlier_pts);
  return;
}
I would say the best way to do this is to disable setOptimizeCoefficients and then do that manually after seg.segment. You basically have to recreate these lines: https://github.com/PointCloudLibrary/pcl/blob/10235c9c1ad47989bdcfebe47f4a369871357e2a/segmentation/include/pcl/segmentation/impl/sac_segmentation.hpp#L115-L123 .
You can access the model via getModel() (https://pointclouds.org/documentation/classpcl_1_1_s_a_c_segmentation.html#ac7b9564ceba35754837b4848cf448d78).
Ultimately got it working, with advice from IBitMyBytes, by setting seg.setOptimizeCoefficients(false); and then manually optimising after doing my own check:
// if we can, optimise model coefficients
if (sac_optimise_coefficients_ && inlier_indices->indices.size() > 4) {
  pcl::SampleConsensusModel<pcl::PointXYZI>::Ptr model = seg.getModel();
  Eigen::VectorXf coeff_refined;
  Eigen::Vector4f coeff_raw(coefficients->values.data());
  model->optimizeModelCoefficients(inlier_indices->indices,
                                   coeff_raw, coeff_refined);
  coefficients->values.resize(coeff_refined.size());
  memcpy(&coefficients->values[0], &coeff_refined[0],
         coeff_refined.size() * sizeof(float));
  // refine inliers
  model->selectWithinDistance(coeff_refined, sac_distance_threshold_,
                              inlier_indices->indices);
}

Square Roots by Newton's Method (SICP example 1.1.7) in R code

I want to apply Newton's Method for square roots through iteration in RStudio, but I keep getting the error
"Error: C stack usage 7969204 is too close to the limit"
when I pass a wrong square-root guess as 'g'. The code works fine when I pass in the right number directly (example: sqriter(2,4) --> 2).
Below is the code I wrote for it.
Thank you for your help!
sqriter <- function(g, x) {
  ifelse(goodguess(g, x), g, sqriter(improve(g, x), x))
}
goodguess <- function(g, x) {
  abs(g * g - x) < 0.001
}
average <- function(g, x) {
  ((g + x) / 2)
}
improve <- function(g, x) {
  average(g, (g / x))
}

Combine into function and iterate

I am attempting to combine a series of loops/functions into one all-encompassing function to then be able to see the result for different input values. While the steps work properly when standalone (and when given just one input), I am having trouble getting the overall function to work. The answer I am getting back is a vector of 1s, which is incorrect.
The goal is to count the number of occurrences of consecutive zeroes in the randomly generated results, and then to see how the probability of consecutive zeroes occurring changes as I change the initial percentage input provided.
Does anyone have a tip for what I'm doing wrong? I have stared at this at several separate points now but cannot figure out where I'm going wrong. Thanks for your help.
### Example
pctgs_seq=seq(0.8,1,.01)
occurs=20
iterations=10
iterate_pctgs = function(x) {
  probs = rep(0, length(pctgs_seq))
  for (i in 1:length(pctgs_seq)) {
    all_sims = lapply(1:iterations, function(x) ifelse(runif(occurs) <= i, 1, 0))
    totals = sapply(all_sims, sum)
    consec_zeroes = function(x) {
      g = 0
      for (i in 1:(length(x) - 1)) {
        g = g + ifelse(x[i] + x[i + 1] == 0, 1, 0)
      }
      return(g)
    }
    consec_zeroes_sim = sapply(all_sims, consec_zeroes)
    no_consec_prob = sum(consec_zeroes_sim == 0) / length(consec_zeroes_sim)
    probs[i] = no_consec_prob
  }
  return(probs)
}
answer=iterate_pctgs(pctgs_seq)
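A sketch of one likely fix: inside the loop the simulated draws are compared with the loop index i (ifelse(runif(occurs) <= i, 1, 0)), and since i is always at least 1, every draw becomes 1, no consecutive zeroes ever occur, and every probability comes back as 1. Comparing with pctgs_seq[i] instead (arguments renamed below to avoid the shadowed x and i) gives probabilities that actually vary with the input percentage:
iterate_pctgs <- function(pctgs) {
  # count adjacent pairs of positions that are both zero
  consec_zeroes <- function(x) {
    sum(x[-length(x)] + x[-1] == 0)
  }
  probs <- numeric(length(pctgs))
  for (i in seq_along(pctgs)) {
    all_sims <- lapply(1:iterations,
                       function(k) ifelse(runif(occurs) <= pctgs[i], 1, 0))
    consec_zeroes_sim <- sapply(all_sims, consec_zeroes)
    probs[i] <- mean(consec_zeroes_sim == 0)   # P(no consecutive zeroes)
  }
  probs
}
answer <- iterate_pctgs(pctgs_seq)   # occurs and iterations as defined above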

Find min value (parameter estimation) based on recurrence equations

Sorry for the trivial question, but I'm not a programmer. Have I translated the following task into an R function correctly?
I have recurrence equations, e.g. (p1_par, ..., p4_par are the parameters to find):
z1[i+1]= z1[i]+p1_par*p2_par
z12[i+1]= z12[i]+(p1_par*z1[i]-p3_par*z1z2[i]-p4_par)*p2_par
z1z2[i+1]=z1z2[i]+(-p3_par*z12[i]-p4_par*z1z2[i])*p2_par
i=1,...,5
with the initial conditions for i=0:
z1_0=1.23
z12_0=1
z1z2_0=0
and t=6, y=c(0.1,0.06,0.08,0.04,0.05,0.01)
I want to find the parameters based on the minimum value of a function, e.g. like this:
(-2*p1_par*z1[i]-z12[i]+y[i+1]^2+2*p3_par*z1z2[i]+2*p4_par*z1z3[i])^2
I tried to build the function in R like this:
function1 = function(p1_par, p2_par, p3_par, p4_par, y, t) {
  ep = 1
  summa = 0
  result = rep(1, t)
  for (i in 1:t) {
    z1_0 = 1.23
    z12_0 = 1
    z1z2_0 = 0
    z1[1] = z1_0 + p1_par * p2_par
    z12[1] = z12_0 + (p1_par * z1_0 - p3_par * z1z2_0 - p4_par) * p2_par
    z1z2[1] = z1z2_0 + (-p3_par * z12_0 - p4_par * z1z2_0) * p2_par
    z1[i + 1] = z1[i] + p1_par * p2_par
    z12[i + 1] = z12[i] + (p1_par * z1[i] - p3_par * z1z2[i] - p4_par) * p2_par
    z1z2[i + 1] = z1z2[i] + (-p3_par * z12[i] - p4_par * z1z2[i]) * p2_par
    if (i == 1) {
      result[ep] = (-2 * p1_par * z1_0 - z12_0 + y[i + 1]^2 + 2 * p3_par * z1z2_0 + 2 * p4_par * z1z3_0)^2
    } else {
      result[ep] = (-2 * p1_par * z1[i] - z12[i] + y[i + 1]^2 + 2 * p3_par * z1z2[i] + 2 * p4_par * z1z3[i])^2
    }
    summa <<- summa + result[ep]
    ep = ep + 1
  }
  return(result)
}
Have I translated the task into the R function correctly? Results from other software (like Math) differ. Thanks in advance for help.
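As posted, the function cannot run: z1, z12 and z1z2 are indexed before they are created, the step-1 assignments are repeated on every pass through the loop, z1z3 is never defined, and an optimiser needs a single number back rather than the whole result vector. Below is a minimal sketch of one way to reorganise it and hand it to optim(). Two assumptions are built in and flagged in the comments: z1z3 is treated as z1z2 (only z1, z12 and z1z2 appear in the recurrences), and the residual at step i is paired with y[i], because y has t elements so y[i+1] would run past the end of the vector for i = t.
# Sketch only. Assumes z1z3 means z1z2, and pairs the step-i state with y[i].
function1 <- function(p, y, t) {
  p1 <- p[1]; p2 <- p[2]; p3 <- p[3]; p4 <- p[4]
  # preallocate the state vectors; position 1 holds the i = 0 initial values
  z1 <- numeric(t + 1); z12 <- numeric(t + 1); z1z2 <- numeric(t + 1)
  z1[1] <- 1.23; z12[1] <- 1; z1z2[1] <- 0
  result <- numeric(t)
  for (i in 1:t) {
    # the recurrences from the question
    z1[i + 1]   <- z1[i] + p1 * p2
    z12[i + 1]  <- z12[i] + (p1 * z1[i] - p3 * z1z2[i] - p4) * p2
    z1z2[i + 1] <- z1z2[i] + (-p3 * z12[i] - p4 * z1z2[i]) * p2
    # squared residual for observation y[i]
    result[i] <- (-2 * p1 * z1[i] - z12[i] + y[i]^2 +
                  2 * p3 * z1z2[i] + 2 * p4 * z1z2[i])^2
  }
  sum(result)   # one number, so it can be minimised directly
}
y <- c(0.1, 0.06, 0.08, 0.04, 0.05, 0.01)
fit <- optim(par = c(1, 1, 1, 1), fn = function1, y = y, t = 6)
fit$par   # estimated p1_par, ..., p4_par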

How to subtract two vectors in OpenBUGS

I am having a very hard time trying to subtract two vectors in my OpenBUGS model. The last line of the code below keeps giving an "expected right parenthesis" error:
model {
  for (i in 1:N) {
    for (j in 1:q) {
      vv[i,j] ~ dnorm(vz[i,j], tau.eta[j])
    }
    vz[i,1:q] ~ dmnorm(media.z[i,], K.delta[,])
    for (j in 1:q) {
      mean.z[i,j] <- inprod(K[i,], vbeta[j,])
    }
    K[i,1] <- 1.0
    for (j in 1:N) {
      K[i,j+1] <- sum(ve[,i] - ve[,j])
    }
  }
If I change that line to K[i,j+1] <- sum(ve[,i]) - sum(ve[,j]), then the model works fine, but that is not what I want to do. I would like to subtract element-wise.
I searched SO for OpenBUGS, but there are only a few unrelated topics:
OpenBUGS - Variable is not defined
OpenBUGS: missing value in Bernoulli distribution
On Stats Stack Exchange there is this post, which is close, but I still could not work out how to implement this in my model:
https://stats.stackexchange.com/questions/20653/vector-multiplication-in-bugs-and-jags/20739#20739
I understand I have to write a for loop, but this thing is sure giving me a big headache. :)
I tried changing that line to:
for(k in 1:p) { temp [k] <- ve[k,i] - ve[k,j] }
K[i,j+1] <- sum(temp[])
where 'p' is the number of rows in each 've'. Now I keep getting the error "multiple definitions of node temp[1]".
I could definitely use some help. It will be much appreciated.
Best regards to all and thanks in advance!
PS: I wanted to add the tag "OpenBUGS" to this question but unfortunately I couldn't because it would be a new tag and I do not have enough reputation. I added "winbugs" instead.
The "multiple definitions" error is because temp[k] is redefined over and over again within a loop over i and another loop over j - you can only define it once. To get around that, use i and j subscripts like
for(k in 1:p) { temp[k,i,j] <- ve[k,i] - ve[k,j] }
K[i,j+1] <- sum(temp[,i,j])
Though if that compiles and runs, I'd check the results to make sure that's exactly what you want mathematically.
