My question concerns something that should be fairly simple, but I can't make it work. I know you can calculate x and y and then plot them with the plot function, but can the same be done with the curve function?
I want to plot the following R function f2:
n <- 1
m <- 2
f2 <- function(x) min(x^n, x^(-m))
But this code fails:
curve(f2, 0, 10)
Any suggestions?
You need to use the vectorised pmin instead of min (take a look at ?pmin to understand the difference):
f2 = function(x, n = 1, m = 2) {
pmin(x^n, x^(-m))
}
curve(f2, from = 0, to = 10)
On a side note, I would make n and m arguments of f2 to avoid global variables.
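To see the difference at a glance: min() collapses all its arguments to a single number, while pmin() takes element-wise (parallel) minima:
min(1:3, 3:1)   # a single value
#> [1] 1
pmin(1:3, 3:1)  # element-wise minima
#> [1] 1 2 1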
Update
To plot f2 for different arguments n and m you would do
curve(f2(x, n = 2, m = 3), from = 0, to = 10)
As has been hinted at, the main reason the call to curve fails is that curve requires a vectorized function (in this case, one that takes a vector of inputs and returns a vector of outputs), while your f2() takes and returns a scalar. You can vectorize f2 on the fly with Vectorize:
n <- 1
m <- 2
f2 <- function(x) min(x^n, x^(-m))
curve(Vectorize(f2)(x), 0, 10)
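For context, Vectorize() is essentially a wrapper around mapply(), so you can also bind the vectorized version to a name once and reuse it:
f2v <- Vectorize(f2)  # f2v accepts a vector of x values
f2v(c(0.5, 2, 4))
#> [1] 0.5000 0.2500 0.0625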
Is the curve function needed or would this work?
n <- 1 # assumption
m <- 2 # assumption
f2 <- function(x) min(x^n, x^(-m))
x.range <- seq(0, 10, by=.1)
y.results <- sapply(x.range, f2) # Apply a Function over a List or Vector
# plot(x.range, y.results) old answer
plot(x.range, y.results, type="l") # improvement per @alistaire
Related
I apologise if this is a duplicate; I've read answers to similar questions to no avail.
I'm trying to integrate under a curve, given a specific formula (below) for said integration.
As a toy example, here's some data:
library(deSolve)  # provides lsoda(), used below

Antia_Model <- function(t,y,p1){
r <- p1[1]; k <- p1[2]; p <- p1[3]; o <- p1[4]
P <- y[1]; I <- y[2]
dP = r*P - k*P*I
dI = p*I*(P/(P + o))
list(c(dP,dI))
}
r <- 0.25; k <- 0.01; p <- 1; o <- 1000 # note that r can range between 0.1 and 10 in this model
parms <- c(r, k, p, o)
P0 <- 1; I0 <- 1
N0 <- c(P0, I0)
TT <- seq(0.1, 50, 0.1)
results <- lsoda(N0, TT, Antia_Model, parms, verbose = FALSE)
P <- results[,2]; I <- results[,3]
As I understand it, I should be able to use the auc() function from the MESS package (can I just use the integrate() function? Unclear...), which should look something like this:
auc(P, TT, from = x1, to = x2, type = "spline")
Though I don't really understand how to use the "from" and "to" arguments, or how to incorporate "u" from the original integration formula...
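For what it's worth, MESS::auc() expects the x values (here the times TT) first and the y values second, so a call along these lines would compute the area under P(t) between two time points (a sketch; the limits shown are just the full simulated range, not values from the original formula):
library(MESS)
auc(TT, P, from = min(TT), to = max(TT), type = "spline")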
Using the integrate() function seems more intuitive, but if I try:
u <- 1
integrand <- function(P) {u*P}
q <- integrate(integrand, lower = 0, upper = Inf)
I get this error:
# Error in integrate(integrand, lower = 0, upper = Inf) :
# the integral is probably divergent
As you can tell, I'm pretty lost, so any help would be greatly appreciated! Thank you so much! :)
integrand is technically acceptable, but right now it's the identity function f(x) = x. The area under it on [0, Inf) is infinite, i.e. the integral is divergent.
From the documentation of integrate the first argument is:
an R function taking a numeric first argument and returning a numeric vector of the same length. Returning a non-finite element will generate an error.
If instead you use a pulse function:
pulse <- function(x) {ifelse(x < 5 & x >= 0, 1, 0)}
integrate(pulse, lower = 0, upper = Inf)
#> 5 with absolute error < 8.5e-05
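Coming back to the simulated data above, one way to get a finite integral of u*P is to interpolate the solver output and integrate over a finite time window (a sketch; P_fun and the limits are illustrative assumptions, not part of the original formula):
u <- 1
P_fun <- approxfun(TT, P)  # interpolating function through the lsoda output
integrate(function(t) u * P_fun(t), lower = min(TT), upper = max(TT))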
I need to simulate the probabilities that are computed by the function transitionProbability1D from the isingLenzMC package. I want to evaluate it for 10 values of bF at once and receive a vector of results, but I still receive only one number and I don't know why. Here is my code:
library(isingLenzMC)

N <- 100
conf0 <- genConfig1D(N)
conf1 <- flipConfig1D(conf0)
# transition probability at J = H = 1/kBT = 1.0; last argument: 1 = Metropolis, 2 = Glauber
bF <- 1:10
J <- h <- rep(1,10)
# HERE IT DOESN'T WORK EVEN THOUGH bF IS A VECTOR
transitionProbability1D(bF, conf0, conf1, J, h, 1)
#> 0.298615
You might want to look at how to vectorize a function.
For your example, the following would probably give you what you expect:
library(isingLenzMC)
N <- 100
conf0 <- genConfig1D(N)
conf1 <- flipConfig1D(conf0)
# transition probability at J = H = 1/kBT = 1.0; last argument: 1 = Metropolis, 2 = Glauber
bF <- 1:10
# Here I changed these inputs to single values
J <- h <- 1
# HERE IT DOESN'T WORK EVEN THOUGH bF IS A VECTOR
transitionProbability1D(bF, conf0, conf1, J, h, 1)
# Vectorize on the first argument
transitionProbability1D_vectorized <- Vectorize(transitionProbability1D, vectorize.args = "ikBT")
# Now there are as many results as input values
transitionProbability1D_vectorized(ikBT = bF, x = conf0, xflip = conf1, J = J, H = h, probSel = 1)
You could also use a (for) loop!
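For completeness, the loop version might look like this (same inputs as in the snippet above):
res <- numeric(length(bF))
for (i in seq_along(bF)) {
  res[i] <- transitionProbability1D(bF[i], conf0, conf1, J, h, 1)
}
res  # one transition probability per value of bF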
I want to find the maximum of a scalar function
delta <- function(a,b) {
t(f(b)) %*% Ninv %*% f(a)
}
where a and b are vectors with length=2, over a meshgrid.
The idea is basically to exchange a 2-d point (a) in design0 with a 2-d point (b) in candidate, to gain the maximum increment in delta. I want to try all combinations from
design0 = matrix(c(-1, 1, -1, 1, -1, -1), 3)
# 3 by 2 matrix
candidate=expand.grid(seq(-1,1,0.01), seq(-1, 1, 0.01))
# 40401 by 2 data frame
and it should return a 3 by 40401 matrix.
I first thought of something like
outer(design0, t(candidate), delta)
but it seems outer does not work with two-dimensional arguments.
Then I thought of using mapply:
mapply(delta, a = ..., b = ...)
I would need something like expand.grid(design0, candidate) to make a and b match, but that does not work either.
If all of these fail, I may end up with a nested loop... but I hate loops when dealing with matrices, and this is already part of an iteration.
Below is executable code
beta0=9; beta1=5; beta2=5;
design0 = matrix(c(-1,1,-1,1,-1,-1),3)
candidate = expand.grid( seq(-1,1,0.01), seq(-1,1,0.01))
f <- function(x){
eta = beta0 + beta1*x[1] + beta2*x[2]
sqrt(exp(-eta)/(1+exp(-eta))^2)*c(1,x[1],x[2])
}
Fmat = t(apply(design0,1,f))
Ninv = solve(t(Fmat) %*% Fmat)
delta <- function(a,b) {
t(f(b)) %*% Ninv %*% f(a)
}
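For what it's worth, here is one loop-free sketch that exploits the fact that delta is bilinear in f(a) and f(b); the names Fc, Fd, and deltas are mine, not from the original post:
Fc <- apply(candidate, 1, f)        # 3 x 40401, column j is f(candidate[j, ])
Fd <- apply(design0, 1, f)          # 3 x 3, column i is f(design0[i, ])
deltas <- t(t(Fc) %*% Ninv %*% Fd)  # deltas[i, j] = delta(design0[i, ], candidate[j, ])
dim(deltas)
#> [1]     3 40401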
I have one known value of x and one formula: y = 0.92*x. Now I want to flip the LHS and RHS; the expected output is x = y/0.92. That example is multiplication and division, but it should handle all basic mathematical operations. Is there any package for this in R, or does anyone have a function that does it?
I don't think there is a ready-made way to accomplish what you want. Rewriting mathematical formulas while they are represented as R functions is not an easy thing to do. What you can do is use uniroot to solve the equation numerically. For example:
# function for inverting a function numerically. y is your y value;
# only x values inside `interval` will be considered
inverseFun = function(y, fun, interval = c(-1e2, 1e2), ...) {
  # uniroot() varies its first positional argument; that value falls into
  # `...` of f below and is forwarded on to fun(), while .y and .fun are
  # matched by name
  f = function(.y, .fun, ...) y - fun(...)
  uniroot(f, interval, .y = y, .fun = fun, ...)
}
# standard math functions
add = function(a, b) a + b
subtract = function(a, b) a - b
multiply = function(a, b) a * b
divide = function(a, b) a / b
# test it works
inverseFun(y = 3, add, b = 1)
# 2
inverseFun(y = -10, subtract, b = 1)
# -9
inverseFun(y = 30, multiply, b = 2)
# 15
inverseFun(y = 30, divide, b = 1.75)
# 52.5
The above is an example; inverseFun(y = 3, `+`, b = 1) also works, although it might be less clear what is happening. A last remark is that uniroot searches for a root numerically, which might be time-consuming for complicated functions.
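Tying this back to the y = 0.92*x example from the question:
inverseFun(y = 30, multiply, b = 0.92)
# root is approximately 32.61, i.e. x = 30 / 0.92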
I would like to compute the convolution of two probability distributions in R and I need some help. For the sake of simplicity, let's say I have a variable x that is normally distributed with mean = 1.0 and stdev = 0.5, and y that is log-normally distributed with mean = 1.5 and stdev = 0.75. I want to determine z = x + y. I understand that the distribution of z is not known a priori.
As an aside, the real-world example I am working with requires adding two random variables that are distributed according to a number of different distributions.
Does anyone know how to add two random variables by convoluting the probability density functions of x and y?
I have tried generating n normally distributed random values (with above parameters) and adding them to n log-normally distributed random values. However, I wish to know if I can use the convolution method instead. Any help would be greatly appreciated.
EDIT
Thank you for these answers. I define a pdf and try to do the convolution integral, but R complains at the integration step. My pdfs are log-Pearson Type III and are as follows:
dlp3 <- function(x, a, b, g) {  # log-Pearson III density: shape a, scale b, location g
p1 <- 1/(x*abs(b) * gamma(a))
p2 <- ((log(x)-g)/b)^(a-1)
p3 <- exp(-1* (log(x)-g) / b)
d <- p1 * p2 * p3
return(d)
}
f.m <- function(x) dlp3(x,3.2594,-0.18218,0.53441)
f.s <- function(x) dlp3(x,9.5645,-0.07676,1.184)
f.t <- function(z) integrate(function(x,z) f.s(z-x)*f.m(x),-Inf,Inf,z)$value
f.t <- Vectorize(f.t)
integrate(f.t, lower = 0, upper = 3.6)
R complains at the last step since the f.t function is bounded and my integration limits are probably not correct. Any ideas on how to solve this?
Here is one way.
f.X <- function(x) dnorm(x,1,0.5) # normal (mu=1, sigma=0.5)
f.Y <- function(y) dlnorm(y,1.5, 0.75) # log-normal (mu=1.5, sigma=0.75)
# convolution integral
f.Z <- function(z) integrate(function(x,z) f.Y(z-x)*f.X(x),-Inf,Inf,z)$value
f.Z <- Vectorize(f.Z) # need to vectorize the resulting fn.
set.seed(1) # for reproducible example
X <- rnorm(1000,1,0.5)
Y <- rlnorm(1000,1.5,0.75)
Z <- X + Y
# compare the methods
hist(Z,freq=F,breaks=50, xlim=c(0,30))
z <- seq(0,50,0.01)
lines(z,f.Z(z),lty=2,col="red")
Same thing using package distr.
library(distr)
N <- Norm(mean=1, sd=0.5) # Norm() constructs a normal distribution object
L <- Lnorm(meanlog=1.5,sdlog=0.75) # same for the log-normal
conv <- convpow(L+N,1) # object of class AbscontDistribution
f.Z <- d(conv) # density function
hist(Z,freq=F,breaks=50, xlim=c(0,30))
z <- seq(0,50,0.01)
lines(z,f.Z(z),lty=2,col="red")
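If I remember correctly, the same conv object also exposes the CDF and a random sampler through distr's p() and r() accessors, which can be handy for checks:
p.Z <- p(conv)  # cumulative distribution function of Z
p.Z(5)          # P(Z <= 5)
r.Z <- r(conv)  # random number generator for Z
hist(r.Z(1000), freq = FALSE, breaks = 50)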
I was having trouble getting integrate() to work for different density parameters, so I came up with an alternative to @jlhoward's answer using Riemann approximation:
set.seed(1)
#densities to be convolved. could also put these in the function below
d1 <- function(x) dnorm(x,1,0.5) #
d2 <- function(y) dlnorm(y,1.5, 0.75)
#Riemann approximation of convolution
conv <- function(t, a, b, d) { #a to b needs to cover the range of densities above. d needs to be small for accurate approx.
z <- NA
x <- seq(a, b, d)
for (i in 1:length(t)){
print(i) # progress indicator
z[i] <- sum(d1(x)*d2(t[i]-x)*d)
}
return(z)
}
#check against sampled convolution
X <- rnorm(1000, 1, 0.5)
Y <- rlnorm(1000, 1.5, 0.75)
Z <- X + Y
t <- seq(0, 50, 0.05) #range to evaluate t, smaller increment -> smoother curve
hist(Z, breaks = 50, freq = F, xlim = c(0,30))
lines(t, conv(t, -100, 100, 0.1), type = "s", col = "red")
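As an aside, the same Riemann sum can be written without the explicit loop; conv2 below is my own sketch using outer(), not part of the original answer:
conv2 <- function(t, a, b, d) {
  x <- seq(a, b, d)
  # rows of the outer() matrix index x, columns index t, so the
  # matrix product sums over x for every value of t at once
  as.vector(d1(x) %*% outer(x, t, function(x, t) d2(t - x)) * d)
}
lines(t, conv2(t, -100, 100, 0.1), col = "blue")  # should overlay the red curve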