I am programming a function to run an out-of-sample simulation. My function is the following:
oos <- function(alpha, rho, rv){
  ar_oos <- alpha + rho * rv
  return(ar_oos)
}
I now define arbitrary values for alpha and rho:
rho <- 0.4
alpha <- 45
Let's create a matrix to save our results:
results <- matrix(NA, nrow = 1239, ncol = 1)
The first 240 values will be a stationary time series:
results[1:240] <- rnorm(240, 0, 2)
What I need now is a loop that takes the previous value and recalculates the function oos until all 1,239 values of my matrix are filled:
results[241] <- oos(alpha, rho, results[240])
results[242] <- oos(alpha, rho, results[241])
results[243] <- oos(alpha, rho, results[242])
results[244] <- oos(alpha, rho, results[243])
Does anyone have an idea?
Thank you very much!
You can write it in a simple for-loop:
for (i in 241:length(results)) {
  results[i] <- oos(alpha, rho, results[i - 1])
}
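Putting the pieces together as one reproducible script (a minimal sketch, using the arbitrary starting values from the question):
set.seed(1) # makes the random start reproducible
rho <- 0.4
alpha <- 45
results <- matrix(NA, nrow = 1239, ncol = 1)
results[1:240] <- rnorm(240, 0, 2)
for (i in 241:length(results)) {
  results[i] <- oos(alpha, rho, results[i - 1])
}
plot(results, type = "l") # the recursion settles at the fixed point alpha / (1 - rho) = 75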
I have constructed a discrete-time SIR model using a loop within a function (I have added my code below).
Currently the results of the iterations come out as a list, which seems to show all the S values first, followed by the I values and then the R values; I have deduced this myself from the nature of the values.
I need the output as a data frame with the column names 'Iteration', 'S', 'I' and 'R' from left to right and the corresponding values underneath, such that reading a row tells you the iteration and the values of S, I and R at that iteration.
I do not know how to construct a data frame that returns the output values in this way. I only started learning R a few weeks ago and am not yet proficient, so any help would be HUGELY appreciated.
Thank you in advance.
#INITIAL CONDITIONS
S=999
I=1
R=0
#PARAMETERS
beta = 0.003 # infectious contact rate (/person/day)
gamma = 0.2 # recovery rate (/day)
#SIR MODEL WITH POISSON SAMPLING
discrete_SIR_model <- function(){
  for(i in 1:30){ # the number of loop iterations is the duration of the model in days, i.e. 'i in 1:30' constitutes 30 days
    deltaI <- rpois(1, beta * I * S) # rate at which individuals in the population are becoming infected
    deltaR <- rpois(1, gamma * I)    # rate at which infected individuals are recovering
    S[i+1] <- S[i] - deltaI
    I[i+1] <- I[i] + deltaI - deltaR
    R[i+1] <- R[i] + deltaR
  }
}
output <- list(c(S, I, R))
output
If a for loop is used, one can define vectors or a data frame beforehand in which the results are stored:
beta <- 0.001 # infectious contact rate (/person/day)
gamma <- 0.2 # recovery rate (/day)
S <- I <- R <- numeric(31)
S[1] <- 999
I[1] <- 1
R[1] <- 0
set.seed(123) # makes the example reproducible
for(i in 1:30){
  deltaI <- rpois(1, beta * I[i] * S[i])
  deltaR <- rpois(1, gamma * I[i])
  S[i+1] <- S[i] - deltaI
  I[i+1] <- I[i] + deltaI - deltaR
  R[i+1] <- R[i] + deltaR
}
output <- data.frame(S, I, R)
output
matplot(output)
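The question also asked for an 'Iteration' column. One way to get it (an assumption about the numbering, here 0 for the initial state through day 30) is to add it when building the data frame:
output <- data.frame(Iteration = 0:30, S, I, R)
head(output)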
As an alternative, it is also possible to employ a package for this. Package deSolve is intended for differential equations, but it can also handle the discrete case with method "euler": with a time step of 1, Euler's method performs exactly the update y[t+1] = y[t] + f(t, y[t]).
library(deSolve)
discrete_SIR_model <- function(t, y, p) {
  with(as.list(c(y, p)), {
    deltaI <- rpois(1, beta * I * S)
    deltaR <- rpois(1, gamma * I)
    list(as.double(c(-deltaI, deltaI - deltaR, deltaR)))
  })
}
y0 <- c(S = 999.0, I = 1, R = 0)
p <- c(
  beta = 0.001, # infectious contact rate (/person/day)
  gamma = 0.2   # recovery rate (/day)
)
times <- 1:30
set.seed(576) # to make the example reproducible
output <- ode(y0, times, discrete_SIR_model, p, method="euler")
plot(output, mfrow=c(1,3))
Note: I reduced beta, otherwise the discrete model would become unstable.
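As with the loop version, the deSolve output can be turned into the requested data frame (ode() returns a matrix whose first column is the time, which here plays the role of the iteration):
sir_df <- as.data.frame(output)
names(sir_df)[1] <- "Iteration"
head(sir_df)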
Hey guys, I'm trying to write an R function that computes a summation. Specifically, I want to program the following formula:
[formula image: s(i, j) = sum over z in N(i) ∩ N(j) of 1 / log(deg(z)), i.e. a sum of inverse log-degrees over the common neighbors of vertices i and j]
This is what I have, but I can't figure out why it won't work or what I am doing wrong. Note this function resembles the Adamic-Adar scoring coefficient.
Please note that the original data is called "fblog" and has 192 vertices.
#Scoring function
library(sand)
nv <- vcount(fblog)
ncn2 <- numeric()
upgrade_graph(fblog)
A2 <- get.adjacency(fblog)
for(i in (1:(nv-1))){
  ni <- neighborhood(fblog, 1, i)
  nj <- neighborhood(fblog, 1, (i+1):nv)
  nbhd.ij <- mapply(intersect, ni, nj, SIMPLIFY=FALSE)
  for(i in unlist(nbhd.ij)) {
    k_deg = unlist(lapply(nbhd.ij, length))
    temp = 1/(log(k_deg))
  }
  ncn2 <- c(ncn2, temp)
}
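For what it's worth (a suggestion, assuming the goal really is the standard Adamic-Adar score): igraph already ships this measure as the inverse log-weighted similarity, which can serve as a reference against a hand-rolled loop. Note also that upgrade_graph() returns the updated graph rather than modifying it in place, so its result must be assigned:
library(sand) # loads igraph and the fblog data
fblog <- upgrade_graph(fblog)
aa <- similarity(fblog, method = "invlogweighted") # 192 x 192 matrix of Adamic-Adar scores
aa[1:5, 1:5] # scores for the first few vertex pairs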
I am following up on an old question without an answer (https://stackoverflow.com/questions/31653029/r-thresholding-networks-with-inputted-p-values-in-q-graph). I'm trying to assess the relations between my variables. For this, I've used a correlation network map. Now I would like to add a significance threshold component; for instance, I want to show only results with p-values < 0.05. Any idea how I could implement this in my code?
Data set: https://www.dropbox.com/s/xntc3i4eqmlcnsj/d100_partition_all3.csv?dl=0
My code:
library(qgraph)
cor_d100_partition_all3<-cor(d100_partition_all3)
qgraph(cor_d100_partition_all3, layout="spring",
label.cex=0.9, labels=names(d100_partition_all3),
label.scale=FALSE, details = TRUE)
Additionally, I have this small piece of code that transforms the correlations into p-values:
Code:
cor.mtest <- function(mat, ...) {
  mat <- as.matrix(mat)
  n <- ncol(mat)
  p.mat <- matrix(NA, n, n)
  diag(p.mat) <- 0
  for (i in 1:(n - 1)) {
    for (j in (i + 1):n) {
      tmp <- cor.test(mat[, i], mat[, j], ...)
      p.mat[i, j] <- p.mat[j, i] <- tmp$p.value
    }
  }
  colnames(p.mat) <- rownames(p.mat) <- colnames(mat)
  p.mat
}
p.mat <- cor.mtest(d100_partition_all3)
Cheers
There are a few ways to plot only the significant correlations. First, you can pass additional arguments to the qgraph() function; see its documentation for more details. The call below should have values that are close to what is needed.
qgraph(cor_d100_partition_all3
       , layout = "spring"
       , label.cex = 0.9
       , labels = names(d100_partition_all3)
       , label.scale = FALSE
       , details = TRUE
       , minimum = 'sig'  # minimum based on statistical significance
       , alpha = 0.05     # significance criterion
       , bonf = FALSE     # whether a Bonferroni correction should be used
       , sampleSize = 6   # number of observations
)
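A small note (an assumption about the data layout, one row per observation): rather than hard-coding sampleSize, it can be computed from the data and passed in:
n_obs <- nrow(d100_partition_all3)
# then use sampleSize = n_obs in the qgraph() call above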
A second option is to create a modified correlation matrix. When the correlations are not statistically significant based on your cor.mtest() function, the value is set to NA in the modified correlation matrix. This modified matrix is plotted. A main visual difference between the first and second solutions seems to be the relative line weights.
# initializing the modified correlation matrix
cor_d100_partition_all3_mod <- cor_d100_partition_all3
# looping through all elements and setting values to NA when the p-value is greater than 0.05
for(i in 1:nrow(cor_d100_partition_all3)){
  for(j in 1:nrow(cor_d100_partition_all3)){
    if(p.mat[i,j] > 0.05){
      cor_d100_partition_all3_mod[i,j] <- NA
    }
  }
}
# plotting result
qgraph(cor_d100_partition_all3_mod
       , layout = "spring"
       , label.cex = 0.7
       , labels = names(d100_partition_all3)
       , label.scale = FALSE
       , details = FALSE
)
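The double loop can also be collapsed into a single vectorized assignment, which does the same thing (the diagonal stays intact because cor.mtest() sets its p-values to 0):
cor_d100_partition_all3_mod <- cor_d100_partition_all3
cor_d100_partition_all3_mod[p.mat > 0.05] <- NA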
I am modeling the population change in a food web of species, using ODE and deSolve in R. Obviously the populations should not fall below zero, so I added an event function and ran the model as below. Although the results change compared to when I did not use the event function, it still produces negative values. What is wrong?
#using events in a function to distinguish and address the negative abundances
eventfun <- function(t, y, parms){
  y[which(y < 0)] <- 0
  return(y)
}
# =============================== main code
max.time = 100
start.time = 50
initials <- c(N, R)
#parms <- list(webs=webs, a=a, b=b, h=h, m=m, basals=basals, mu=mu, Y=Y, K=K, no.species=no.species, flow=flow,S=S, neighs=neighs$neighs.per, dispers.maps=dispers.maps)
temp.abund <- ode(y=initials, func=solve.model, times=0:max.time, parms=parms, events = list(func = eventfun, time = 0:max.time))
and here is the ODE function (if it helps in finding the problem):
solve.model <- function(t, y, parms){
  y <- ifelse(y < 1e-6, 0, y)
  with(parms, {
    # return from vector form into matrix form for calculations
    (R <- as.matrix(y[(max(no.species)*length(no.species)+1):length(y)]))
    (N <- matrix(y[1:(max(no.species)*length(no.species))], ncol=length(no.species)))
    dy1 <- matrix(nrow=max(no.species), ncol=length(no.species))
    dy2 <- matrix(nrow=length(no.species), ncol=1)
    no.webs <- length(no.species)
    for (i in 1:no.webs){
      species <- no.species[i]
      (abundance <- N[1:species,i])
      adj <- as.matrix(webs[[i]])
      a.temp <- a[1:species, 1:species]*adj
      b.temp <- b[1:species, 1:species]*adj
      h.temp <- h[1:species, 1:species]*adj
      (sum.over.preys <- abundance%*%(a.temp*h.temp))
      (sum.over.predators <- (a.temp*h.temp)%*%abundance)
      # calculating growth of basal species
      (basal.growth <- basals[,i]*N[,i]*(mu*R[i]/(K+R[i])-m))
      # calculating growth for non-basal species
      no.basal <- rep(1, len=species) - basals[1:species]
      predator.growth <- rep(0, max(no.species))
      (predator.growth[1:species] <- ((abundance%*%(a.temp*b.temp))/(1+sum.over.preys)-m*no.basal)*abundance)
      predation <- rep(0, max(no.species))
      (predation[1:species] <- (((a.temp*b.temp)%*%abundance)/t(1+sum.over.preys))*abundance)
      (pop <- basal.growth + predator.growth - predation)
      dy1[,i] <- pop
      dy2[i] <- 0.0005 # to keep the resource at a nearly constant value
    }
    # calculating dispersal; these terms can easily be replaced
    # by arbitrary adjacency maps of connections between food webs
    disp.left <- dy1*d*dispers.maps$left.immig
    disp.left <- disp.left[,neighs[,2]]
    disp.right <- dy1*d*dispers.maps$right.immig
    disp.right <- disp.right[,neighs[,3]]
    emig <- dy1*d*dispers.maps$emigration
    mortality <- m*dy1
    dy1 <- dy1 + disp.left + disp.right - emig
    return(list(c(dy1, dy2)))
  })
}
Thank you so much for your help.
I have had success using a similar event function defined like this:
eventfun <- function(t, y, parms){
  with(as.list(y), {
    y[y < 1e-6] <- 0
    return(y)
  })
}
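To hook this into the solver, a sketch following the call pattern from the question (solve.model, initials, and parms are as defined there):
times <- 0:100
out <- ode(y = initials, times = times, func = solve.model, parms = parms,
           events = list(func = eventfun, time = times))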
I am using a similar event function to the one posted by jjborrelli. I want to note that for me, ode still shows negative values in its output. However, when ode calculates the next step, it uses 0 rather than the negative value shown for the current step, so you can basically ignore the negative values and replace them with zeros at the end of the simulation.
I have a time series problem that I could easily work out manually, only it would take kind of a long time since I have 4 different AR(2) processes and want to calculate at least 20 lags for each.
What I want to do is use the Yule-Walker equations for rho as follows:
I have an autoregressive process of second order, AR(2). Phi(1) is 0.6 and Phi(2) is 0.4.
I want to calculate the correlation coefficients rho(k) for all lags up to k = 20.
So rho(0) would naturally be 1 and rho(-1) = rho(1). Therefore
rho(1) = phi(1) + phi(2)*rho(1)
rho(k) = phi(1)*rho(k-1) + phi(2)*rho(k-2)
Now I want to solve this in R, but I have no idea how to start. Can anyone help me out here?
You can try my program in R.
In an R script:
AR2 <- function(Zt, tetha0, phi1, phi2, nlag)
{
  n <- length(Zt)
  Zbar <- mean(Zt)
  Zt1 <- rep(Zbar, n)                  # series lagged by 1, padded with the mean
  for(i in 2:n){Zt1[i] <- Zt[i-1]}
  Zt2 <- rep(Zbar, n)                  # series lagged by 2, padded with the mean
  for(i in 3:n){Zt2[i] <- Zt[i-2]}
  Zhat <- tetha0 + phi1*Zt1 + phi2*Zt2 # fitted values of the AR(2)
  error <- Zt - Zhat                   # residuals
  ACF(error, nlag)
}
ACF <- function(error, nlag)
{
  n <- length(error)
  rho <- rep(0, nlag)
  for(k in 1:nlag)
  {
    a <- 0
    b <- 0
    for(t in 1:(n-k)){a <- a + (error[t]*error[t+k])} # lag-k autocovariance (numerator)
    for(t in 1:n){b <- b + (error[t]^2)}              # variance (denominator)
    rho[k] <- a/b
  }
  return(rho)
}
In the R console:
Suppose you have a series Zt, with tetha(0) = 0, phi(1) = 0.6, phi(2) = 0.4, and 20 lags:
AR2(Zt, 0, 0.6, 0.4, 20)
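Note that the program above estimates a sample ACF from residuals, which needs data. The theoretical rho(k) from the question's recursion needs no data at all; a direct translation (a minimal sketch, storing rho(0) in rho[1] because R indexes from 1) is:
phi1 <- 0.6
phi2 <- 0.4
rho <- numeric(21)          # rho[k+1] holds rho(k)
rho[1] <- 1                 # rho(0) = 1
rho[2] <- phi1 / (1 - phi2) # solving rho(1) = phi1 + phi2*rho(1)
for (k in 2:20) {
  rho[k+1] <- phi1 * rho[k] + phi2 * rho[k-1]
}
rho
For a stationary AR(2) this can be cross-checked with stats::ARMAacf(ar = c(phi1, phi2), lag.max = 20); with these particular values phi1 + phi2 = 1, so the process sits on the unit-root boundary and the recursion simply returns 1 for every lag.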