How to construct a sequence with a pattern in R

I would like to construct a sequence of length 50 of the following type:
$X_{n+1} = 4 X_n (1 - X_n)$. For your information, this is the logistic map for r = 4. For the logistic map with parameter r = 4 and an initial state in (0, 1), the attractor is also the interval (0, 1), and the invariant probability measure is the beta distribution with parameters a = 0.5 and b = 0.5. (The logistic map is a polynomial mapping, equivalently a recurrence relation, of degree 2, often cited as an archetypal example of how complex, chaotic behaviour can arise from very simple non-linear dynamical equations.) How can I do this in R?

There are some ready-to-use solutions on the net. I cite the general solution from mage's blog, where you can find a more detailed description.
logistic.map <- function(r, x, N, M){
  ## r: bifurcation parameter
  ## x: initial value
  ## N: number of iterations
  ## M: number of iteration points to be returned
  z <- 1:N
  z[1] <- x
  for(i in c(1:(N-1))){
    z[i+1] <- r * z[i] * (1 - z[i])
  }
  ## Return the last M iterations
  z[c((N-M):N)]
}
For the OP's example:
logistic.map(4,0.2,50,49)

This isn't really an R question, is it? It's more about basic programming. Anyway, you probably need an accumulator and a value to process.
values <- 0.2               ## this accumulates as a vector, starting with 0.2
xn <- values                ## xn gets the first value
for (it in 2:50) {          ## start the loop from the second iteration
  xn <- 4L*xn*(1L-xn)       ## perform the sequence function
  values <- c(values, xn)   ## add the new value to the vector
}
values
# [1] 0.2000000000 0.6400000000 0.9216000000 0.2890137600 0.8219392261 0.5854205387 0.9708133262 0.1133392473 0.4019738493 0.9615634951 0.1478365599 0.5039236459
# [13] 0.9999384200 0.0002463048 0.0009849765 0.0039360251 0.0156821314 0.0617448085 0.2317295484 0.7121238592 0.8200138734 0.5903644834 0.9673370405 0.1263843622
# [25] 0.4416454208 0.9863789723 0.0537419811 0.2034151221 0.6481496409 0.9122067356 0.3203424285 0.8708926280 0.4497546341 0.9899016128 0.0399856390 0.1535471506
# [37] 0.5198816927 0.9984188732 0.0063145074 0.0250985376 0.0978744041 0.3531800204 0.9137755744 0.3151590962 0.8633353611 0.4719496615 0.9968527140 0.0125495222
# [49] 0.0495681269 0.1884445109
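
For a sequence of length 50, growing values with c() inside the loop is fine, but for longer sequences it is usually better to preallocate the result vector. A minimal alternative sketch of the same recurrence with preallocation:
n <- 50
values <- numeric(n)   # preallocate the result vector
values[1] <- 0.2       # initial state
for (i in 2:n) {
  values[i] <- 4 * values[i - 1] * (1 - values[i - 1])
}
values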

Related

singular matrix or not: conflict between determinant and rank

I made a correlation matrix from exponentially smoothed returns, using Appendix C in https://openresearch-repository.anu.edu.au/bitstream/1885/65527/2/01_Pozzi_Exponential_smoothing_weighted_2012.pdf as a guide.
It's a 101 by 101 matrix, but I don't know if it is singular or not, due to the following conflicting results:
pracma::Rank says its rank is 101;
matrixcalc::is.singular.matrix returns TRUE;
base::determinant.matrix gives a very close-to-zero value.
pracma::Rank(try.wgtd.cor)
#[1] 101
matrixcalc::is.singular.matrix(try.wgtd.cor)
#[1] TRUE
base::determinant.matrix(try.wgtd.cor, logarithm = FALSE)
#$modulus
#[1] 2.368591e-55
#attr(,"logarithm")
#[1] FALSE
#
#$sign
#[1] 1
#
#attr(,"class")
#[1] "det"
Does anyone know why/how this could be?
No no no, don't rely on the determinant. A small determinant does not necessarily mean singularity. For example, the following diagonal matrix is not singular at all, yet has a very small determinant.
## all diagonal elements are 0.1; dimension 101 x 101
D <- diag(0.1, nrow = 101, ncol = 101)
## the obvious way to compute det(D)
prod(diag(D))
#[1] 1e-101
## use base::determinant.matrix
determinant.matrix(D, logarithm = FALSE)$modulus
#[1] 1e-101
The determinant equals the product of the eigenvalues, so if all eigenvalues of a (large) matrix are smaller than 1 in modulus, the determinant is bound to be very small, regardless of whether the matrix is singular.
matrixcalc::is.singular.matrix is based on the determinant, so do not trust it. In addition, its result is subjective because you can tweak the tol argument, for example:
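Continuing with the diagonal matrix D from above (a small sketch; as noted, is.singular.matrix is determinant-based and exposes a tol argument), the verdict flips as soon as tol is pushed below det(D):
library(matrixcalc)
D <- diag(0.1, nrow = 101, ncol = 101)   # the 101 x 101 diagonal matrix from above, det(D) = 1e-101
is.singular.matrix(D)                    # expected: TRUE with the default tolerance
is.singular.matrix(D, tol = 1e-120)      # expected: FALSE once tol is smaller than det(D)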
By contrast, pracma::Rank uses both QR and SVD factorizations to determine rank. The result is extremely reliable. Here is the source code of Rank (with my comments):
function (M)
{
  if (length(M) == 0)
    return(0)
  if (!is.numeric(M))
    stop("Argument 'M' must be a numeric matrix.")
  if (is.vector(M))
    M <- matrix(c(M), nrow = length(M), ncol = 1)
  ## detect rank by QR factorization
  r1 <- qr(M)$rank
  ## detect rank by SVD factorization
  sigma <- svd(M)$d
  tol <- max(dim(M)) * max(sigma) * .Machine$double.eps
  r2 <- sum(sigma > tol)
  ## check consistency
  if (r1 != r2)
    warning("Rank calculation may be problematic.")
  return(r2)
}
In conclusion, your 101 x 101 matrix try.wgtd.cor actually has full rank!
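As an additional sanity check (a sketch, assuming the matrix try.wgtd.cor from the question is still in the workspace), you can look at the eigenvalues directly; for a symmetric correlation matrix the rank is the number of eigenvalues that are not numerically zero:
ev <- eigen(try.wgtd.cor, symmetric = TRUE, only.values = TRUE)$values
min(ev)   # full rank as long as the smallest eigenvalue is clearly above zero
sum(ev > length(ev) * max(ev) * .Machine$double.eps)   # count of numerically nonzero eigenvalues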

Why am I getting NAs in this calculation in R?

While working on an Rcpp program, I used the sample() function, which gave me the following error: "NAs not allowed in probability." I traced this issue to the fact that the probability vector I used had NA values in it, but I have no idea how they got there. Below is some R code that reproduces the errors:
n.0=20
n.1=20
n.reps=1
beta0.vals=rep(seq(-.3,.1,,n.0),n.reps)
beta1.vals=rep(seq(-7,0,,n.1),n.reps)
beta.grd=as.matrix(expand.grid(beta0.vals,beta1.vals))
n.rnd=200
beta.rnd.grd=cbind(runif(n.rnd,min(beta0.vals),max(beta0.vals)),runif(n.rnd,min(beta1.vals),max(beta1.vals)))
beta.grd=rbind(beta.grd,beta.rnd.grd)
N = 22670
count = 0
for(i in 1:dim(beta.grd)[1]){ # iterate through 600 possible beta values in beta grid
  beta.ind = 0 # indicator for current pair of beta values
  for(j in 1:N){ # iterate through all possible Nsums
    logit = beta.grd[i,1]/N*(j - .1*N)^2 + beta.grd[i,2];
    phi01 = exp(logit)/(1 + exp(logit))
    if(is.na(phi01)){
      count = count + 1
    }
  }
}
cat("Total number of invalid probabilities: ", count)
Here, $\beta_0 \in (-0.3, 0.1), \beta_1 \in (-7, 0), N = 22670, N_\text{sum} \in (1, N)$. Note that $N$ and $N_\text{sum}$ are integers, whereas the beta values may not be.
Since mathematically $\phi_{01} \in (0,1)$, I'm assuming the NAs arise because R does not like extremely small values. I am also getting an overwhelming number of NA values, more NAs than actual numbers. Why would I be getting NAs in this code?
Add print(logit) next to count = count + 1 and you will find lots of logit values greater than 1000. exp(1000) == Inf, so you divide Inf by Inf, which gives NaN, and NaN counts as NA:
> exp(500)
[1] 1.403592e+217
> Inf/Inf
[1] NaN
> is.na(NaN)
[1] TRUE
So your problem is not numbers that are too small but numbers that are too large, coming from the evaluation of exp(x) with x larger than roughly 700:
> exp(709)
[1] 8.218407e+307
> exp(710)
[1] Inf
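The cutoff of roughly 700 comes from the largest double-precision number R can represent; a quick sketch to see where exp() overflows:
.Machine$double.xmax        # about 1.797693e+308, the largest finite double
log(.Machine$double.xmax)   # about 709.78, so exp(x) is Inf for x above this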
Bernhard's answer correctly identifies the problem:
If logit is large, exp(logit) = Inf.
Here is a solution:
for(i in 1:dim(beta.grd)[1]){ # iterate through 600 possible beta values in beta grid
  beta.ind = 0 # indicator for current pair of beta values
  for(j in 1:N){ # iterate through all possible Nsums
    logit = beta.grd[i,1]/N*(j - .1*N)^2 + beta.grd[i,2];
    ## This one isn't great because exp(logit) can be very large
    # phi01 = exp(logit)/(1 + exp(logit))
    ## So, we say instead
    ## phi01 = 1 / ( 1 + exp(-logit) )
    phi01 = plogis(logit)
    if(is.na(phi01)){
      count = count + 1
    }
  }
}
cat("Total number of invalid probabilities: ", count)
# Total number of invalid probabilities: 0
We can use the more stable form 1 / (1 + exp(-logit)) (to convince yourself of this, multiply your original expression by exp(-logit) / exp(-logit)).
Luckily, either way, R has a built-in function plogis() that calculates these probabilities quickly and accurately.
You can see from the help file (?plogis) that this function evaluates the expression I gave, but you can also double-check to reassure yourself:
x = rnorm(1000)
y = 1 / (1 + exp(-x))
z = plogis(x)
all.equal(y, z)
[1] TRUE
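To illustrate why the stable form matters (a small extra sketch, not part of the original answer), compare the two expressions at a large value of logit:
logit <- 1000
exp(logit) / (1 + exp(logit))   # Inf / Inf, returns NaN
1 / (1 + exp(-logit))           # exp(-1000) underflows to 0, returns 1
plogis(logit)                   # also returns 1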

Find integral value using MC method

I have a homework assignment in a course called 'Monte Carlo methods' and I'm stuck on one task.
The task is as follows:
Using the MC method, find an approximate value of the integral
$$\iiint_D \frac{(x_1 + x_2)\, e^{-(x_1 + x_2 + 2 x_3)}}{x_1^2 + x_2 + x_3 + 1}\, dx_1\, dx_2\, dx_3,$$
where $D = [0, \infty) \times [0, \infty) \times [0, \infty)$ (the original post showed the integral as a picture; the integrand above is the one encoded in the code below).
Also, find the error bound for probability 0.99.
What I've done so far is in the code below.
The main question I have is: which distribution should I choose for the generated values, and what's the core logic behind the choice?
At the moment I used a plain normal distribution with parameters (0, 1) just to get some initial results, but as the results show, the error value is really big, so it is probably not the best solution.
Thanks in advance!
n <- 100000
alfa <- 0.01 # 1 - 0.99
# for now we take the generator from the normal distribution, because the shape of the
# integrand is quite similar to the normal density function
# gen <- function(n){
#   return(matrix(runif(3*n, 0, Inf), ncol = 3))
# }
gen <- function(n){
  return(matrix(rnorm(3*n, 0, 1), ncol = 3))
}
g <- function(x){
  # tihedus <- dunif(x[,1],0,Inf)*dunif(x[,2],0,Inf)*dunif(x[,3],0,Inf)
  # "tihedus" (density) must use the density dnorm(), not the random generator rnorm()
  tihedus <- dnorm(x[,1], 0, 1) * dnorm(x[,2], 0, 1) * dnorm(x[,3], 0, 1)
  # importance-sampling weight: integrand / proposal density, times the indicator of D
  # (the indicator of D is the product of the three conditions, not their sum)
  return( (x[,1] + x[,2]) * exp(-(x[,1] + x[,2] + 2*x[,3])) / (x[,1]^2 + x[,2] + x[,3] + 1) /
            tihedus * ((x[,1] >= 0) * (x[,2] >= 0) * (x[,3] >= 0)) )
}
MC(gen, g, n, alfa)
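MC() is presumably a helper supplied with the course materials and is not shown in the question. A minimal sketch of what such an estimator could look like, under the assumption that it should return the sample-mean estimate of the integral together with an error bound for confidence level 1 - alfa:
MC <- function(gen, g, n, alfa){
  y <- g(gen(n))                               # weighted integrand at the sampled points
  est <- mean(y)                               # Monte Carlo estimate of the integral
  err <- qnorm(1 - alfa/2) * sd(y) / sqrt(n)   # normal-approximation error bound
  c(estimate = est, error = err)
}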

Solve systems of nonlinear equations in R / Black-Scholes-Merton Model

I am writing my Master's final project, in which I am deriving the probability of default using the Black-Scholes-Merton model. I have got stuck in the R code. Mathematically, I want to solve this system of nonlinear equations with the package nleqslv:
library(nleqslv)
T <- 1
D1 <- 20010.75
R <- 0.8516
sigmaS <- .11
SO1 <- 1311.74
fnewton <- function(x){
  y <- numeric(2)
  d1 <- (log(x[1]/D1)+(R+x[2]^2/2)*T)/x[2]*sqrt(T)
  d2 <- d1 - x[2]*sqrt(T)
  y[1] <- SO1 - (x[1]*pnorm(d1) - exp(-R*T)*D1*pnorm(d2))
  y[2] <- sigmaS*SO1 - pnorm(d1)*x[2]*x[1]
  y
}
xstart <- c(1311.74,0.11)
nleqslv(xstart, fnewton, method="Broyden")
# $x
# [1] 1311.74 0.11
# $fvec
# [1] 1311.7400 144.2914
# $termcd
# [1] 6
# $message
# [1] "Jacobian is singular (see allowSingular option)"
# $scalex
# [1] 1 1
# $nfcnt
# [1] 0
# $njcnt
# [1] 1
# $iter
# [1] 1
I have tried this with many values of the 5 inputs (stated above; I have computed them for 2 companies for different years), but I am not getting final values for the two unknowns (asset value and asset volatility).
I get the message "Jacobian is singular (see allowSingular option)". If I allow a singular Jacobian using control=list(trace=1,allowSingular=TRUE), still no answer is displayed. I do not know how to obtain the solution for these 2 variables.
I really don't know what I am doing wrong, as I based my model on Tetereva's slides (her model code is on slide no. 5), whose presentation is the first result when googling
https://www.google.de/search?q=moodys+KMV+in+R&rlz=1C1SVED_enDE401DE401&aq=f&oq=moodys+KMV+in+R&aqs=chrome.0.57.13309j0&sourceid=chrome&ie=UTF-8#q=distance+to+default+in+R
Like me, though more successfully, she calculates the Distance to Default risk measure via the Black-Scholes-Merton approach. In this model, the value of equity (usually represented by the market capitalization, SO1) can be written as a European call option on the firm's assets.
The other variables are:
x[1]: the variable I want to derive, value of total assets
x[2]: the variable I want to derive, volatility of total assets
D1: the book value of debt (19982009)
R: a risk-free interest rate
T: the time horizon, set to 1 year
sigmaS: estimated (historical) equity volatility
You should be able to use the initial values of SO1 and sigmaS as starting values for nleqslv.
First of all, the R code given by Tetereva doesn't seem quite correct (her variable Z should be D1 as you have named it; similarly for her S0 and D).
I have modified Tetereva's code into this:
library(nleqslv)
T <- 1
D1 <- 33404048
R <- 2.32
sigmaS <- .02396919
SO1 <- 4740291 # Ve?
fnewton <- function(x){
  y <- numeric(2)
  d1 <- (log(x[1]/D1)+(R+x[2]^2/2)*T)/x[2]*sqrt(T)
  d2 <- d1 - x[2]*sqrt(T)
  y[1] <- SO1 - (x[1]*pnorm(d1) - exp(-R*T)*D1*pnorm(d2))
  y[2] <- sigmaS*SO1 - pnorm(d1)*x[2]*x[1]
  y
}
xstart <- c(SO1,sigmaS)
nleqslv(xstart, fnewton, method="Broyden",control=list(trace=1))
nleqslv(xstart, fnewton, method="Newton",control=list(trace=1))
which will give the solution given by Tetereva. (I use trace=1 here just to check the iteration steps.)
I believe the value you give for R should be 8.516 and not something else. Using your values for the parameters
T <- 1
D1 <- 20010.75
R <- 8.516 # modified
sigmaS <- .11
SO1 <- 1311.74
like this
xstart <- c(1311.74,0.11)
nleqslv(xstart, fnewton, method="Broyden")
nleqslv(xstart, fnewton, method="Newton")
Running nleqslv with these values converges very quickly.
If one uses R <- 2.32 (like Tetereva), nleqslv will also converge, albeit with more iterations.
I cannot tell you what R should actually be, but from Tetereva's presentation I assume R is given in percent. Since I don't have enough knowledge of the Black-Scholes model, I can't help with finding the correct values for the various parameters. That's up to you.
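A small sketch (not part of the original answer) of how to inspect the nleqslv result and verify convergence once the parameters are fixed; a termcd of 1 means the function criterion was met:
res <- nleqslv(xstart, fnewton, method = "Broyden")
res$termcd   # 1 indicates convergence of the function values
res$x        # the solved asset value x[1] and asset volatility x[2]
res$fvec     # residuals of the two equations; should be near zero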

Create a function with multiple parameters in R

I want to compute the following functions (the original post showed them as an image; they are reconstructed here from the code below): $f(n,x) = n\, g(x)\, (1 - G(x))^{n-1}$ and $h(n) = \int_0^\infty x\, f(n,x)\, dx$.
Here, g(x) is the density function of a distribution and G(x) its cumulative distribution function. I want to compute these functions for several distributions. In addition, I use the library fitdistrplus.
To create g, I use the function do.call this way:
g<-function(x) {do.call(paste("d",i,sep=""),c(list(x=x),fti$estimate))}
fti$estimate contains the parameters of the distribution i.
G(x) is the cumulative distribution function, computed this way:
G<-function(x) {do.call(paste("p",i,sep=""),c(list(q=x),fti$estimate))}
I compute f(n,x) this way:
f<function(n,x) {n*g(x)*(1-G(x))^(n-1)
Finally, I compute h(n) this way:
h<- function(n) {integrate(function(x) {x*f(n,x)},0,Inf)}
However, I can't plot these functions; I get the following warnings:
1: In n*g(x):
Longer object length is not a multiple of shorter object length
2: In (1-G(x))^(n-1):
Longer object length is not a multiple of shorter object length
3: In x*f(n,x) :
Longer object length is not a multiple of shorter object length
Besides, if I just want to plot f(n,x), I get this error:
Error in list(x=x) :'x' is missing
The minimal snippet I have is the following:
#i can be "exp" "lnorm" "norm" etc...
for( i in functionsName) {
  png(paste(fileBase,"_",i,"_","graphics.png",sep=""))
  plot.new()
  fti <- fitdist(data, i)
  plotdist(data, i, para=as.list(fti[[1]]))
  #fti is a data.table or data.frame
  #fti$estimate looks like this:
  # meanlog    sdlog
  #8.475449 1.204958
  #g
  pdf <- function(x) {do.call(paste("d",i,sep=""), c(list(x=x),fti$estimate))}
  #G
  cdf <- function(x) do.call(paste("p",i,sep=""), c(list(q=x),fti$estimate))
  #f
  minLaw <- function(n,x) {n*pdf(x)*(1-cdf(x))^(n-1)}
  #h
  minExpectedValue <- function(n) {integrate(function(x) {x*minLaw(n,x)},0,Inf)}
  #these 2 following lines give an error
  plot(minExpectedValue)
  plot(minLaw)
  dev.off()
}
I had to do some reverse engineering to figure out your d1, q1, etc. calls, but I think this is how you do it. Perhaps the original problem lies in a function call like f(n=2:3, x=1:9); in such a call, n should be a single value, not a vector of values.
Even if the length of x were a multiple of the length of n, the output would most likely not be what you really wanted.
If you give n in vector form, you may end up with recycled (and therefore wrong) output:
> print(data.frame(n=2:3, x=1:6))
  n x
1 2 1
2 3 2
3 2 3
4 3 4
5 2 5
6 3 6
where f would be evaluated with n=2 at the point x=1, with n=3 at the point x=2, etc. What you really would have wanted is something along the lines of:
> print(expand.grid(x=1:5, n=2:3))
   x n
1  1 2
2  2 2
3  3 2
4  4 2
5  5 2
6  1 3
7  2 3
8  3 3
9  4 3
10 5 3
You could achieve this by calling f separately for each value of n:
lapply(2:3, FUN=function(n) (f(n, x=1:5)))
#[[1]]
#[1] 0.0004981910 0.0006066275 0.0007328627 0.0008786344 0.0010456478
#
#[[2]]
#[1] 0.0007464956 0.0009087272 0.0010974595 0.0013152213 0.0015644676
Did you use the same fti for all the distribution fits, even though it should have been different for each? Or does the i in fti refer to the index i, and it was actually a list of fits of the form ft[[i]]?
Below is a wrapper function, which is called separately for each n-value (and distribution i):
wrapper <- function(i, x, n, fti){
  # As was provided by OP
  g <- function(x) {do.call(paste("d",i,sep=""), c(list(x=x), fti$estimate))}
  G <- function(x) {do.call(paste("p",i,sep=""), c(list(q=x), fti$estimate))}
  # does the i in fti refer to fit of i:th distribution, i.e. should it be a list where i:th location in ft is i:th distribution estimates?
  f <- function(n,x) {n*g(x)*(1-G(x))^(n-1)}
  # was missing a '-' and a '}'
  h <- function(n) {integrate(function(x) {x*f(n,x)}, 0, Inf)}
  list(gres = g(x), Gres = G(x), fres = f(n,x), hres = h(n))
}
# Example data
require("fitdistrplus")
data(groundbeef)
serving <- groundbeef$serving
# Gumbel distribution
d1 <- function(x, a, b) 1/b*exp((a-x)/b)*exp(-exp((a-x)/b))
p1 <- function(q, a, b) exp(-exp((a-q)/b))
q1 <- function(p, a, b) a-b*log(-log(p))
fti1 <- fitdist(serving, "1", start=list(a=10, b=10))
#> fti1$estimate
# a b
#56.95893 29.07871
# Normal distribution
# dnorm, pnorm and qnorm are available in the default environment
d2 <- dnorm
p2 <- pnorm
q2 <- qnorm
fti2 <- fitdist(serving, "2", start=list(mean=0, sd=1))
#> fti2$estimate
# mean sd
#73.67743 35.92581
# Sequence of x-values
xs <- seq(-100, 100, by=1)
print((resultdist1n2 <- wrapper(i=1, x=xs, n=2, fti=fti1))$hres)
print((resultdist1n3 <- wrapper(i=1, x=xs, n=3, fti=fti1))$hres)
print((resultdist2n2 <- wrapper(i=2, x=xs, n=2, fti=fti2))$hres)
print((resultdist2n3 <- wrapper(i=2, x=xs, n=3, fti=fti2))$hres)
plot(xs, resultdist1n2$fres, col=1, type="l", ylim=c(0,0.025), xlab="x", ylab="f(n, x)")
points(xs, resultdist1n3$fres, col=2, type="l")
points(xs, resultdist2n2$fres, col=3, type="l")
points(xs, resultdist2n3$fres, col=4, type="l")
legend("topleft", legend=c("Gamma (i=1) n=2", "Gamma (i=1) n=3", "Normal (i=2) n=2", "Normal (i=2) n=3"), col=1:4, lty=1)
And the results of your desired h as found in resultdist1n2$hres etc:
h(n=2) for distribution i=1:
53.59385 with absolute error < 0.00022
h(n=3) for distribution i=1:
45.23146 with absolute error < 4.5e-05
h(n=2) for distribution i=2:
53.93748 with absolute error < 1.1e-05
h(n=3) for distribution i=2:
44.06331 with absolute error < 2e-05
EDIT: Here's how one uses lapply to make the call for each value of n in the vector 0 <= n <= 256:
ns <- 0:256
res1 <- lapply(ns, FUN=function(nseq) wrapper(i=1, x=xs, n=nseq, fti=fti1))
par(mfrow=c(1,2))
plot.new()
plot.window(xlim=c(-100,100), ylim=c(0, 0.05))
box(); axis(1); axis(2); title(xlab="x", ylab="f(n,x)", main="f(n,x) for Gumbel (i=1), n=0:256")
for(i in 1:length(ns)) points(xs, res1[[i]]$fres, col=rainbow(257)[i], type="l")
# perform similarly for the other distributions by calling with i=2, fti=fti2
# h as a function of n for dist i=1
plot(ns, unlist(lapply(res1, FUN=function(x) x$hres$value)), col=rainbow(257), xlab="n", ylab="h(n)", main="h(n) for Gumbel (i=1), n=0:256")
I would plot each distribution i separately like this.
The problem is that the plot method for functions expects the function to be vectorised. In other words, if given an argument of length N, it should return a vector of results, also of length N.
Your minExpectedValue doesn't satisfy this; it expects that n will be a scalar, and returns a scalar. You can quickly fix this up with Vectorize. You also need to specify the name of the argument to plot over, in this case n.
minExpectedValue <- Vectorize(minExpectedValue)
plot(minExpectedValue, xname="n")
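To see the effect in isolation, here is a small toy analogue (my own sketch, using the exponential distribution rather than a fitted one) of a scalar-only function built on integrate() that plots fine once vectorised:
# expected value of the minimum of n iid Exp(1) variables, written with the same integral form
ev <- function(n) integrate(function(x) x * n * dexp(x) * (1 - pexp(x))^(n - 1), 0, Inf)$value
ev(2)                                                 # works for a single n
plot(Vectorize(ev), from = 1, to = 20, xname = "n")   # works once vectorised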
