Biomass model with density - r

I am trying to develop a common biomass model for several species that includes wood density. Here is my data set:
Species_Name DBH_cm Wood_Density Leaf_Biomass_kg
Aam 10.9 0.55 4.495175666
Aam 8.3 0.55 3.003987585
Aam 18.3 0.55 7.0453234
Akashmoni 26.6 0.68 8.68327883
Akashmoni 18 0.68 5.514198965
Akashmoni 20.6 0.68 7.140993296
Amloki 13.7 0.64 0.418757191
Amloki 14.6 0.64 0.348964326
Amra 19 0.29 0
Arjun 13.3 0.82 0
Bajna 13 0.70 0
Bel 19.6 0.83 0.458638794
Sal 14.40 0.82 0.996750392
Sal 12.20 0.82 0.644956136
Sal 10.00 0.82 0.947928706
Sal 14.20 0.82 0.767434214
Sal 11.50 0.82 0.636970398
Sal 13.20 0.82 0.445111844
Sal 13.30 0.82 0.706039477
Sal 10.70 0.82 0.475809213
I tried to set the missing values to NA using:
tree[which(tree$Leaf_Biomass_kg == 0),]$Leaf_Biomass_kg <- NA
My model code is:
library(nlme)
start <- coef(lm(log(Leaf_Biomass_kg)~log(DBH_cm)+log(Wood_Density), data=tree))
start[1] <- exp(start[1])
names(start) <- c("a","b1", "b2")
m <- nlme(Leaf_Biomass_kg~a*DBH_cm^b1*Wood_Density^b2,
data=cbind(tree,g="a"),
fixed=a+b1+b2~1,
start=start,
groups=~g,
weights=varPower(form=~DBH_cm))
It gives:
Error in finiteDiffGrad(model, data, pars) :
NAs are not allowed in subscripted assignments
Can anyone help me in this regard? I added na.action=na.exclude to my model, but the problem still exists:
m <- nlme(Leaf_Biomass_kg~a*DBH_cm^b1*Wood_Density^b2,
data=cbind(tree,g="a"),
fixed=a+b1+b2~1,
start=start,
groups=~g,
weights=varPower(form=~DBH_cm),
na.action=na.exclude)

You have missing values (NA) in your data set. Try setting na.action = na.exclude, or remove the records with NA values; you may also want to impute the missing values. How R handles missing values in model fitting has been addressed here:
https://stats.stackexchange.com/questions/11000/how-does-r-handle-missing-values-in-lm
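If you go the removal route, here is a minimal sketch, reusing the model call from the question and assuming the zero leaf-biomass records really are missing observations rather than true zeros:
tree_cc <- tree[!is.na(tree$Leaf_Biomass_kg), ]  # drop trees with no leaf biomass measurement
m <- nlme(Leaf_Biomass_kg ~ a * DBH_cm^b1 * Wood_Density^b2,
          data = cbind(tree_cc, g = "a"),
          fixed = a + b1 + b2 ~ 1,
          start = start,
          groups = ~g,
          weights = varPower(form = ~DBH_cm))
Whether the fit converges will depend on how many complete records remain per species.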


Time-series average of cross-sectional correlations

I have a panel dataset looking like this:
head(panel_data)
date symbol close rv rv_plus rv_minus rskew rkurt Mkt.RF SMB HML
1 1999-11-19 a 25.4 19.3 6.76 12.6 -0.791 4.36 -0.11 0.35 -0.5
2 1999-11-22 a 26.8 10.1 6.44 3.69 0.675 5.38 0.02 0.22 -0.92
3 1999-11-23 a 25.2 8.97 2.56 6.41 -1.04 4.00 -1.29 0.08 0.3
4 1999-11-24 a 25.6 5.81 2.86 2.96 -0.505 5.45 0.87 0.08 -0.89
5 1999-11-26 a 25.6 2.78 1.53 1.25 0.617 5.60 0.23 0.92 -0.2
6 1999-11-29 a 26.1 5.07 2.76 2.30 -0.236 7.27 -0.6 0.570 -0.14
where the variable symbol depicts different stocks. I want to calculate the time-series average of the cross-sectional correlation between the variables rskew and rkurt. This means I need to compute the correlation between rskew and rkurt over all different stocks at each point in time and then calculate the time-series average afterwards.
I tried to do it with the rollapply function from the zoo package, but since the number of different stocks is not the same for all dates, I cannot simply define width as an integer. Here is what I tried for a sample width of 20:
panel_data <- panel_data %>%
group_by(date) %>%
mutate(cor_skew_kurt = rollapply(data = panel_data[7:8],
width=20,
FUN=cor,
align="right",
na.rm=TRUE,
fill=NA)) %>%
ungroup
Is there a way to do this without having to define a fixed width for each date group?
Or should I maybe use a different approach to do this?
[Edited] Can you try running the code below? I have recreated an example emulating your issue. If I understood your problem correctly, this code should at least put you on the path to the right solution, as it solves the issue of unequal time-window lengths.
###################
#Recreating an example dataset with unequal dates across stocks
set.seed(1)
date6 <- c('1999-11-19','1999-11-22','1999-11-23','1999-11-24','1999-11-26','1999-11-29')
date5 <- c('1999-11-19','1999-11-22','1999-11-23','1999-11-24','1999-11-26')
date4 <- c('1999-11-19','1999-11-22','1999-11-23','1999-11-24')
cor_skew_kurt <- c(rep(NaN,21))
symbol <- c(rep('a',6),rep('b',5),rep('c',4),rep('d',6))
rskew <- rnorm(21,mean=1, sd =1)
rkurt <- rnorm(21, mean=5, sd = 1)
panel_data <- cbind.data.frame(date = c(date6,date5,date4,date6), symbol = symbol, rskew = rskew, rkurt = rkurt, cor_skew_kurt = cor_skew_kurt )
panel_data$date <- as.Date(panel_data$date, '%Y-%m-%d')
# Computing the cor_skew_kurt and filling the table <- ANSWER TO YOUR QUESTION
for (date in unique(panel_data$date)) {
  panel_data[panel_data$date == date, "cor_skew_kurt"] <-
    as.double(cor(panel_data[panel_data$date == date, "rskew"],
                  panel_data[panel_data$date == date, "rkurt"]))
}
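To get the time-series average the question ultimately asks for, take one correlation value per date (every row of a date group carries the same value) and average over time. A minimal sketch, assuming the loop above has been run:
per_date <- unique(panel_data[, c("date", "cor_skew_kurt")])  # one row per date
mean(per_date$cor_skew_kurt, na.rm = TRUE)  # na.rm drops dates with a single stock, where cor() is NA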

avoid nested for-loops for tricky operation R

I have to do an operation that involves two matrices: matrix #1 holds data, and matrix #2 holds coefficients used to multiply the columns of matrix #1.
Matrix #1 is:
dim(dat)
[1] 612 2068
dat[1:6,1:8]
X0005 X0010 X0011 X0013 X0015 X0016 X0017 X0018
1 1.96 1.82 8.80 1.75 2.95 1.10 0.46 0.96
2 1.17 0.94 2.74 0.59 0.86 0.63 0.16 0.31
3 2.17 2.53 10.40 4.19 4.79 2.22 0.31 3.32
4 3.62 1.93 6.25 2.38 2.25 0.69 0.16 1.01
5 2.32 1.93 3.74 1.97 1.31 0.44 0.28 0.98
6 1.30 2.04 1.47 1.80 0.43 0.33 0.18 0.46
and matrix #2 is:
dim(lo)
[1] 2068 8
head(lo)
i1 i2 i3 i4 i5 i6
X0005 -0.11858852 0.10336788 0.62618771 0.08706041 -0.02733101 0.006287923
X0010 0.06405406 0.13692216 0.64813610 0.15750302 -0.13503956 0.139280709
X0011 -0.06789727 0.30473549 0.07727417 0.24907723 -0.05345123 0.141591330
X0013 0.20909664 0.01275553 0.21067894 0.12666704 -0.02836527 0.464548147
X0015 -0.07690560 0.18788859 -0.03551084 0.19120773 -0.10196578 0.234037820
X0016 -0.06442454 0.34993481 -0.04057001 0.20258195 -0.09318325 0.130669546
i7 i8
X0005 0.08571777 0.031531478
X0010 0.31170850 -0.003127279
X0011 0.52527759 -0.065002026
X0013 0.27858049 -0.032178156
X0015 0.50693977 -0.058003429
X0016 0.53162596 -0.052091767
I want to multiply each column of matrix #1 by its corresponding coefficient from the first column of matrix #2 and sum up all the resulting columns. Then repeat the operation with the coefficients from the second column of matrix #2, then the third column, and so on.
The result is then a matrix with 8 columns, each a linear combination of the data in matrix #1.
My attempt uses nested for-loops. It works, but takes about 30 minutes to execute. Is there any way to avoid these loops and reduce the computational effort?
Here is my attempt:
r <- nrow(dat)
n <- ncol(dat)
m <- ncol(lo)
eme <- matrix(NA, r, m)
for (i in 1:m) {
  SC <- matrix(NA, r, n)
  for (j in 1:n) {
    nom <- rownames(lo)
    x <- dat[, colnames(dat) == nom[j]]
    SC[, j] <- x * lo[j, i]
    SC1 <- rowSums(SC)
  }
  eme[, i] <- SC1
}
Thanks for your help
It looks like you are just doing matrix-vector multiplication. In R, use the %*% operator, so all the looping is delegated to a Fortran routine. I think it equates to the following:
apply(lo, 2, function(x) dat %*% x)
Your code could be improved by moving the nom <- assignment outside the loops since it recalculates the same thing every iteration. Also, what is the point of SC1 being computed during each iteration?
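If you want the whole 612 x 8 result in one step, the operation is a single matrix product once the columns of dat are ordered to match the rows of lo. A minimal sketch, assuming every rowname of lo has a matching column in dat:
dat_m <- as.matrix(dat[, rownames(lo)])  # reorder columns of dat to match the coefficient rows
eme <- dat_m %*% as.matrix(lo)           # 612 x 8: column k is the combination using lo[, k]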

Ramp up/down missing time-series data in R

I have a set of time-series data (GPS speed data, specifically) which includes gaps of missing values where the signal was lost. Short gaps I plan to fill simply using na.spline; however, this is inappropriate for longer periods. For those I would like to ramp the values from the last true value down to zero, based on predefined acceleration limits.
#create sample data frame
test <- as.data.frame(c(6,5.7,5.4,5.14,4.89,4.64,4.41,4.19,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,5,5.1,5.3,5.4,5.5))
names(test)[1] <- "speed"
#set rate of acceleration for ramp
ramp <- 6
#set sampling rate of receiver
Hz <- 1/10
So for missing data the ramp would use the previous value and the rate of acceleration to get the next data point, until the speed reaches zero (i.e. last speed [4.19] - (Hz * ramp)), yielding the following values:
3.59
2.99
2.39
1.79
1.19
0.59
0
Lastly, I need to do this in the reverse fashion, to ramp up from zero when the signal picks back up again.
Hope this is clear.
Cheers
It's not really elegant, but you can do it in a loop.
na.pos <- which(is.na(test$speed))
acc <- FALSE
for (i in na.pos) {
  if (acc) {
    speed <- test$speed[i-1] + (Hz * ramp)
  } else {
    speed <- test$speed[i-1] - (Hz * ramp)
    if (round(speed, 1) < 0) {
      acc <- TRUE
      speed <- test$speed[i-1] + (Hz * ramp)
    }
  }
  test[i,] <- speed
}
The result is:
speed
1 6.00
2 5.70
3 5.40
4 5.14
5 4.89
6 4.64
7 4.41
8 4.19
9 3.59
10 2.99
11 2.39
12 1.79
13 1.19
14 0.59
15 -0.01
16 0.59
17 1.19
18 1.79
19 2.39
20 2.99
21 3.59
22 4.19
23 4.79
24 5.00
25 5.10
26 5.30
27 5.40
28 5.50
Note the '-0.01': 0.59 - (6 * 1/10) is -0.01, not 0. You can round it later; I decided not to.
When the question says "ramp the values from the last true value down to zero" in each run of NAs, I assume that means any remaining NAs in the run after reaching zero are also to be replaced by zero.
Now, use rleid from data.table to create a grouping vector the same length as test$speed identifying each run in is.na(test$speed), and use ave to create sequence numbers, seqno, within those groups. Then calculate the declining sequence ramp_down by combining na.locf(test$speed) and seqno. Finally, replace the NAs.
library(data.table)
library(zoo)
test_speed <- test$speed
seqno <- ave(test_speed, rleid(is.na(test_speed)), FUN = seq_along)
ramp_down <- pmax(na.locf(test_speed) - seqno * ramp * Hz, 0)
result <- ifelse(is.na(test_speed), ramp_down, test_speed)
giving:
> result
[1] 6.00 5.70 5.40 5.14 4.89 4.64 4.41 4.19 3.59 2.99 2.39 1.79 1.19 0.59 0.00
[16] 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 5.00 5.10 5.30 5.40 5.50
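For the ramp-up half of the question, the same idea can be run from the other end of each NA run: carry the next observed value backwards with na.locf(..., fromLast = TRUE) and count sequence numbers from the end of the run. Taking the elementwise maximum of the two ramps then ramps down to zero and back up within each gap. A minimal sketch under the same assumptions (and assuming the series does not start or end with NA):
seqno_rev <- ave(test_speed, rleid(is.na(test_speed)), FUN = function(x) rev(seq_along(x)))
ramp_up <- pmax(na.locf(test_speed, fromLast = TRUE) - seqno_rev * ramp * Hz, 0)  # ramp towards the next observed speed
result2 <- ifelse(is.na(test_speed), pmax(ramp_down, ramp_up), test_speed)        # down first, then up
For gaps too short for either ramp to reach zero, the two ramps simply meet in the middle of the gap.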

Finding Sync of Columns in R

Data
v1 <- c(52.9799999999814, 53.4200000000128, 52.0899999999965, 57.9700000000012,
60.679999999993, 0.300000000017462, 1.76999999998952, 61.1900000000023,
58.9599999999919, 1.73000000001048, 0.269999999989523, 6.92000000001281,
60.5299999999988, 60.859999999986, 59.5599999999977, 61.0600000000268,
60.6299999999756, 60.9700000000012, 60.1600000000035, 60.4599999999919,
60.0900000000256)
v2 <- c(52.679999999993, 53.140000000014, 52.8899999999849, 57.6700000000128,
60.5199999999895, 2.04000000000815, 61.890000000014, 59.5699999999779,
2.05999999999767, 6.98000000001048, 60.7399999999907, 60.7799999999988,
59.7300000000105, 60.9100000000035, 60.3299999999872, 60.5500000000175,
60.6600000000035, 60.3499999999767, 60.7300000000105, 60.6700000000128,
60.3799999999756)
tv3 <- data.frame(v1,v2)
tv3$v5 <- tv3$v2 - tv3$v1
tv3$v5
[1] -0.30 -0.28 0.80 -0.30 -0.16 1.74 60.12 -1.62 -56.90 5.25 60.47 53.86 -0.80
[14] 0.05 0.77 -0.51 0.03 -0.62 0.57 0.21 0.29
As you can see, the difference should stay small; here, however, at a particular row it jumps to about 60.
Basically, if we remove the 0.30 row from just v1 and shift everything below it up one cell, the difference would no longer jump to 60.
So 0.30 is a noise value, and that is what I need to identify and move into V3.
My desired results are the following.
v1 v2 V3
52.98 52.68 0.3
53.42 53.14 0.27
52.09 52.89
57.97 57.67
60.68 60.52
1.77 2.04
61.19 61.89
58.96 59.57
1.73 2.06
6.92 6.98
60.53 60.74
60.86 60.78
59.56 59.73
61.06 60.91
60.63 60.33
60.97 60.55
60.16 60.66
60.46 60.35
60.09 60.73
So notice here that the two columns are now in sync, differing only by a few points.
Maybe my case requires an implementation of the Needleman-Wunsch algorithm.

R-- repeating linear regression in a large dataset

I'm an R newbie working with an annual time-series dataset (named "timeseries"). The set has one column for the year and another 600 columns with yearly values for different locations ("L1", "L2", etc.), similar to the following:
Year L1 L2 L3 L4
1963 0.63 0.23 1.33 1.41
1964 1.15 0.68 0.21 0.4
1965 1.08 1.06 1.14 0.83
1966 1.69 1.85 1.3 0.76
1967 0.77 0.62 0.44 0.96
I'd like to do a linear regression for each site and can use the following for a single site:
timeL1<-lm(L1~Year, data=timeseries)
summary(timeL1)
But I think there must be a way to automatically repeat this for all the locations. Ideally, I'd like to end up with two vectors of results: one with the coefficients for all the locations and one with the p-values for all the locations. From some searching, I thought the plyr package might work, but I can't figure it out. I'm still learning the basics of R, so any suggestions would be appreciated.
You can do this with one line of code:
apply(df[-1], 2, function(x) summary(lm(x ~ df$Year))$coef[1,c(1,4)])
L1 L2 L3 L4
Estimate -160.0660000 -382.2870000 136.4690000 106.9820000
Pr(>|t|) 0.6069965 0.3886881 0.7340981 0.7030296
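Note that [1, c(1,4)] pulls the intercept's estimate and p-value from each coefficient table; if it is the Year trend you are after, take row 2 instead (same one-liner, only the row index changes):
apply(df[-1], 2, function(x) summary(lm(x ~ df$Year))$coef[2, c(1, 4)])  # slope estimate and Pr(>|t|) per location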
A combination of apply and lapply can accomplish this.
d <- read.table(text="Year L1 L2 L3 L4
1963 0.63 0.23 1.33 1.41
1964 1.15 0.68 0.21 0.4
1965 1.08 1.06 1.14 0.83
1966 1.69 1.85 1.3 0.76
1967 0.77 0.62 0.44 0.96", header=TRUE)
year <- d$Year
d <- d[,-1]
models<-apply(d, 2, function(x) lm(x ~ year))
summaries <- lapply(models, summary)
# coefficients(summary) is a 2 x 4 matrix: rows "(Intercept)" and "year", columns Estimate, Std. Error, t value, Pr(>|t|)
pvals <- lapply(lapply(summaries, coefficients), function(x) x["year", "Pr(>|t|)"])
coefs <- lapply(lapply(summaries, coefficients), function(x) x["year", "Estimate"])
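Since lapply returns lists, a final unlist gives the two plain named vectors asked for:
coefs <- unlist(coefs)  # one slope per location
pvals <- unlist(pvals)  # one p-value per location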
