Why is the first run of microbenchmark always the slowest?

When using microbenchmark I have noticed that the first execution is always a lot slower than the rest. The effect was the same across different machines and with different functions. Does this have something to do with the library, or is this some kind of warm-up that is to be expected?
library(microbenchmark)
X <- matrix(rnorm(100), nrow = 10)
microbenchmark(solve(X))$time
#[1] 82700 23700 18300 17700 19700 19100 16900 17500 17300 16600 16700 16700 18500 16900 17700 16900 17000 16200 17400 17000 16800 16600 17000 16700 16800 17100
#[27] 17300 17100 16800 17800 17400 18100 17400 18100 18000 16700 17400 17300 17000 16800 16400 17300 16700 16900 16900 16700 17200 17800 16600 17100 16800 17800
#[53] 17000 17200 17500 17200 17200 17300 17800 17600 17600 17200 16600 16700 16800 16600 16400 16500 17300 17600 16800 17600 16300 16800 17100 16500 16800 16700
#[79] 16300 16700 16300 16700 16800 16700 16400 17100 16400 17100 17000 18000 16600 16600 16600 16800 16700 16500 17600 19100 17400 16900

It has to do with the warm-up time; see help('microbenchmark'), section Details, on the control argument:
The control list can contain the following entries:
order
[omitted]
warmup
the number of warm-up iterations performed before the actual benchmark. These are used to estimate the timing overhead as well as spinning up the processor from any sleep or idle states it might be in. The default value is 2.
If you increase the number of warm-up iterations, the first run might not be the slowest, though it often is.
library(microbenchmark)
set.seed(2020)
X <- matrix(rnorm(100), nrow = 10)
times <- microbenchmark(solve(X), control = list(warmup = 10))$time
times
# [1] 145229 72724 65333 65305 115715 63797 689113 72101 64830 66392
# [11] 65776 66619 65531 64765 65351 65605 65745 65106 64661 65790
# [21] 65435 64964 66138 65952 66893 65654 65585 75141 74666 69060
# [31] 72725 66650 65486 65894 66808 65381 66039 65959 64842 65029
# [41] 65673 66439 64394 70585 68899 73875 73180 67807 65891 65699
# [51] 64693 63679 65504 80190 66150 65048 64372 64842 65845 65144
# [61] 65543 65297 65485 64695 66580 64921 65453 64840 65559 65805
# [71] 64362 66098 65464 65227 64998 64007 65659 63919 64727 64796
# [81] 65231 64030 65871 65735 64217 65195 65181 65130 66015 63891
# [91] 63755 65274 65116 64573 64244 64214 64148 64457 65346 64228
Now use order() to see which run was slowest:
order(times, decreasing = TRUE)
# [1] 7 1 5 54 28 29 46 47 31 2 8 44 30 45 48 25 35 32 12
# [20] 65 42 10 55 23 72 37 89 38 24 34 49 83 59 70 20 11 17 84
# [39] 50 41 77 26 16 27 69 61 13 53 33 63 73 67 21 36 15 99 3
# [58] 4 62 92 81 74 86 87 60 88 93 18 56 40 75 22 66 39 58 68
# [77] 9 80 14 79 64 51 19 94 98 43 57 71 95 100 85 96 97 82 76
# [96] 78 90 6 91 52
In this case the slowest was the seventh run, not the first.
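In practice, a common workaround is to drop the first timing or report the median, which is robust to the warm-up outlier. A small base-R illustration with a hypothetical timing vector (values in nanoseconds, loosely modeled on the output above):

```r
# Hypothetical timings: one slow warm-up run, then steady-state values
times <- c(82700, 23700, 18300, 17700, 16900, 17500, 17300, 16600)

mean(times)      # pulled up by the first run
mean(times[-1])  # closer to steady state once the first run is dropped
median(times)    # robust to the outlier without dropping anything
```

This is also why summary(microbenchmark(...)) reports the median alongside the mean.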

Gompertz-Makeham parameter estimation

I would like to estimate the parameters of the Gompertz-Makeham distribution, but I haven't gotten a result.
I would like an approach in R similar to this Weibull parameter estimation code:
weibull_loglik <- function(parm) {
  gamma <- parm[1]
  lambda <- parm[2]
  loglik <- sum(dweibull(vec, shape = gamma, scale = lambda, log = TRUE))
  return(-loglik)
}
weibull <- nlm(weibull_loglik, p = c(1, 1), hessian = TRUE, iterlim = 100)
weibull$estimate
c <- weibull$estimate[1]; b <- weibull$estimate[2]
My data:
[1] 872 52 31 26 22 17 11 17 17 8 20 12 25 14 17
[16] 20 17 23 32 37 28 24 43 40 34 29 26 32 34 51
[31] 50 67 84 70 71 137 123 137 172 189 212 251 248 272 314
[46] 374 345 411 494 461 505 506 565 590 535 639 710 733 795 786
[61] 894 963 1019 1149 1185 1356 1354 1460 1622 1783 1843 2049 2262 2316 2591
[76] 2730 2972 3187 3432 3438 3959 3140 3612 3820 3478 4054 3587 3433 3150 2881
[91] 2639 2250 1850 1546 1236 966 729 532 375 256 168 107 65 39 22
[106] 12 6 3 2 1 1
summary(vec)
Min. 1st Qu. Median Mean 3rd Qu. Max.
1.0 32.0 314.0 900.9 1355.0 4054.0
It would be nice to have a reproducible example, but try something like:
library(bbmle)
library(eha)
set.seed(101)
vec <- rmakeham(1000, shape = c(2,3), scale = 2)
dmwrap <- function(x, shape1, shape2, scale, log) {
  res <- try(dmakeham(x, c(shape1, shape2), scale, log = log), silent = TRUE)
  if (inherits(res, "try-error")) return(NA)
  res
}
m1 <- mle2(y ~ dmwrap(shape1, shape2, scale),
           start = list(shape1 = 1, shape2 = 1, scale = 1),
           data = data.frame(y = vec),
           method = "Nelder-Mead")
- Define a wrapper that (1) takes the shape parameters as separate values; (2) returns NA rather than throwing an error when, e.g., parameters are negative.
- Use Nelder-Mead rather than the default BFGS for robustness.
- The fitdistrplus package might help too.
- If you're going to do a lot of this, it may help to fit parameters on the log scale (i.e. use parameters logshape1, etc., and exp(logshape1) etc. in the fitting formula).
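The log-scale advice can be illustrated on the Weibull code from the question. A minimal base-R sketch with simulated stand-in data (the shape/scale values here are made up for the demonstration): exp() maps any real optimizer input back to valid positive parameters, so the optimizer can never propose a negative shape or scale.

```r
set.seed(1)
vec <- rweibull(500, shape = 2, scale = 3)  # simulated stand-in data

# Negative log-likelihood with parameters on the log scale
weibull_nll_log <- function(logparm) {
  gamma  <- exp(logparm[1])  # shape, always positive
  lambda <- exp(logparm[2])  # scale, always positive
  -sum(dweibull(vec, shape = gamma, scale = lambda, log = TRUE))
}

fit <- nlm(weibull_nll_log, p = c(0, 0), hessian = TRUE)
exp(fit$estimate)  # back-transform; should be near c(2, 3)
```

The same trick carries over to the Makeham wrapper: fit logshape1, logshape2, logscale and exponentiate inside the formula.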
I had to work a little harder to fit your data; I scaled the variable by 1000 (and found that I could only compute the log-likelihood; the likelihood gave an error that I didn't bother trying to track down). Unfortunately, it doesn't look like a great fit (too many small values).
x <- scan(text = "872 52 31 26 22 17 11 17 17 8 20 12 25 14 17
20 17 23 32 37 28 24 43 40 34 29 26 32 34 51
50 67 84 70 71 137 123 137 172 189 212 251 248 272 314
374 345 411 494 461 505 506 565 590 535 639 710 733 795 786
894 963 1019 1149 1185 1356 1354 1460 1622 1783 1843 2049 2262 2316 2591
2730 2972 3187 3432 3438 3959 3140 3612 3820 3478 4054 3587 3433 3150 2881
2639 2250 1850 1546 1236 966 729 532 375 256 168 107 65 39 22
12 6 3 2 1 1")
m1 <- mle2(y ~ dmwrap(shape1, shape2, scale),
           start = list(shape1 = 1, shape2 = 1, scale = 10000),
           data = data.frame(y = x/1000),
           method = "Nelder-Mead")
cc <- as.list(coef(m1))
png("gm.png")
hist(x, breaks = 25, freq = FALSE)
with(cc,
     curve(exp(dmwrap(x/1000, shape1, shape2, scale, log = TRUE))/1000, add = TRUE))
dev.off()

Trouble with a character column from a file read in with read.csv in R

On the website:
http://naturalstattrick.com/teamtable.php?season=20172018&stype=2&sit=pp&score=all&rate=n&vs=all&loc=B&gpf=82&fd=2017-10-04&td=2018-04-07
at the bottom of the page there is an option to download a csv. I downloaded the csv file and renamed it Team Season Totals - Natural Stat Trick 2007-2008 5 vs 5 (Counts).csv. I also put the csv file in my working directory.
I successfully read in the file using read.csv.
teams <- read.csv(file = "Team Season Totals - Natural Stat Trick 2007-2008 5 vs 5 (Counts).csv", stringsAsFactors = FALSE)
head(teams)
ï.. Team GP TOI W L OTL ROW CF CA CF. FF FA FF. SF SA SF. GF GA GF. SCF SCA SCF. SCGF SCGA SCGF. SCSH.
1 1 Atlanta Thrashers 82 3539.050 34 40 8 25 2638 3512 42.89 2002 2717 42.42 1505 2052 42.31 125 172 42.09 1195 1500 44.34 83 126 39.71 6.95
2 2 Pittsburgh Penguins 82 3435.417 47 27 8 40 2820 3380 45.48 2192 2542 46.30 1580 1812 46.58 142 122 53.79 1343 1374 49.43 112 90 55.45 8.34
3 3 Los Angeles Kings 82 3502.333 32 43 7 27 3008 3576 45.69 2306 2787 45.28 1649 1961 45.68 137 174 44.05 1049 1286 44.93 63 80 44.06 6.01
4 4 Montreal Canadiens 82 3475.183 47 25 10 42 3089 3601 46.17 2266 2603 46.54 1617 1863 46.47 144 138 51.06 1156 1221 48.63 62 61 50.41 5.36
5 5 Edmonton Oilers 82 3442.633 41 35 6 26 2958 3424 46.35 2255 2585 46.59 1601 1830 46.66 143 166 46.28 1334 1398 48.83 104 116 47.27 7.80
6 6 Philadelphia Flyers 82 3374.800 42 29 11 39 2902 3343 46.47 2188 2505 46.62 1609 1857 46.42 125 137 47.71 919 1028 47.20 61 68 47.29 6.64
SCSV. HDCF HDCA HDCF. HDGF HDGA HDGF. HDSH. HDSV. SH. SV. PDO
1 91.60 388 468 45.33 51 82 38.35 13.14 82.48 8.31 91.62 0.999
2 93.45 503 444 53.12 79 49 61.72 15.71 88.96 8.99 93.27 1.023
3 93.78 270 356 43.13 29 36 44.62 10.74 89.89 8.31 91.13 0.994
4 95.00 271 322 45.70 25 31 44.64 9.23 90.37 8.91 92.59 1.015
5 91.70 443 452 49.50 57 61 48.31 12.87 86.50 8.93 90.93 0.999
6 93.39 257 266 49.14 24 24 50.00 9.34 90.98 7.77 92.62 1.004
One thing I noticed was that the Team column had an accent character in it:
teams$Team
[1] "Atlanta Thrashers" "Pittsburgh Penguins" "Los Angeles Kings" "Montreal Canadiens" "Edmonton Oilers" "Philadelphia Flyers"
[7] "St Louis Blues" "Colorado Avalanche" "Vancouver Canucks" "Minnesota Wild" "Florida Panthers" "Phoenix Coyotes"
[13] "Tampa Bay Lightning" "Buffalo Sabres" "Chicago Blackhawks" "New York Islanders" "Nashville Predators" "Anaheim Ducks"
[19] "Boston Bruins" "Ottawa Senators" "Dallas Stars" "Toronto Maple Leafs" "Carolina Hurricanes" "Columbus Blue Jackets"
[25] "New Jersey Devils" "Calgary Flames" "San Jose Sharks" "New York Rangers" "Washington Capitals" "Detroit Red Wings"
Removing the accent:
teams$Team <- sub(pattern = "Â", replacement = "", teams$Team)
teams$Team[1]
[1] "Atlanta Thrashers"
Now when I want to subset the data based on Team, all the values come back FALSE:
teams$Team[1]
[1] "Atlanta Thrashers"
teams$Team[1] == "Atlanta Thrashers"
[1] FALSE
dplyr::filter(teams, Team == "Atlanta Thrashers")
[1] ï.. Team GP TOI W L OTL ROW CF CA CF. FF FA FF. SF SA SF. GF GA GF. SCF SCA SCF. SCGF SCGA
[26] SCGF. SCSH. SCSV. HDCF HDCA HDCF. HDGF HDGA HDGF. HDSH. HDSV. SH. SV. PDO
<0 rows> (or 0-length row.names)
It comes back FALSE for every team and I don't understand why. Is it something to do with the accent that I removed? Does it have something to do with encoding, i.e., UTF-8? If someone could please assist me I would appreciate it. Thanks.
I figured it out. It had to do with the accent. I used:
iconv(teams$Team, "UTF-8", "UTF-8", sub = ' ')
iconv(teams$Team, "UTF-8", "UTF-8", sub = ' ')[1] == "Atlanta Thrashers"
[1] TRUE
That had never happened to me before, and I have no experience with encodings and UTF-8.
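For reference: the stray "Â" typically appears when a UTF-8 non-breaking space (bytes C2 A0) is read as Latin-1, so removing only the "Â" leaves an invisible non-breaking space behind, which is why == failed. A small base-R sketch with a hypothetical string showing how to detect and fix the hidden character:

```r
team_raw   <- "Atlanta\u00a0Thrashers"  # hypothetical value with a non-breaking space
team_clean <- "Atlanta Thrashers"

team_raw == team_clean                          # FALSE, though both look identical
grepl("[^\\x20-\\x7E]", team_raw, perl = TRUE)  # TRUE: a hidden non-ASCII character

fixed <- gsub("\u00a0", " ", team_raw)          # replace the NBSP with a plain space
fixed == team_clean                             # TRUE
```

The iconv(..., sub = ' ') call in the answer does the same job more generally, substituting anything that can't be represented.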

Converting Data frame to time series in R

I currently have the data set below, but I am unsure how to convert it from its current data frame format into a time series.
I am also unsure how to split this data into in-sample and out-of-sample sets for forecasting.
Date Observations
1 1975/01 5172
2 1975/02 6162
3 1975/03 6979
4 1975/04 5418
5 1976/01 4801
6 1976/02 5849
7 1976/03 6292
8 1976/04 5261
9 1977/01 4461
10 1977/02 5322
11 1977/03 6153
12 1977/04 5377
13 1978/01 4808
14 1978/02 5845
15 1978/03 6023
16 1978/04 5691
17 1979/01 4683
18 1979/02 5663
19 1979/03 6068
20 1979/04 5429
21 1980/01 4897
22 1980/02 5685
23 1980/03 5862
24 1980/04 4663
25 1981/01 4566
26 1981/02 5118
27 1981/03 5261
28 1981/04 4459
29 1982/01 4352
30 1982/02 4995
31 1982/03 5559
32 1982/04 4823
33 1983/01 4462
34 1983/02 5228
35 1983/03 5997
36 1983/04 4725
37 1984/01 4223
38 1984/02 4940
39 1984/03 5780
40 1984/04 5232
41 1985/01 4723
42 1985/02 5219
43 1985/03 5855
44 1985/04 5613
45 1986/01 4987
46 1986/02 6117
47 1986/03 5777
48 1986/04 5803
49 1987/01 5113
50 1987/02 6298
51 1987/03 7152
52 1987/04 6591
53 1988/01 6337
54 1988/02 6672
55 1988/03 7224
56 1988/04 6296
57 1989/01 6957
58 1989/02 7538
59 1989/03 8022
60 1989/04 7216
61 1990/01 6633
62 1990/02 7355
63 1990/03 7897
64 1990/04 7159
65 1991/01 6637
66 1991/02 7629
67 1991/03 8080
68 1991/04 7077
69 1992/01 7190
70 1992/02 7396
71 1992/03 7795
72 1992/04 7147
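No answer was posted above, but the usual base-R approach is ts() plus window(). A minimal sketch, assuming the Date column encodes year/quarter; only the first eight observations from the question are typed out here, with the split point chosen arbitrarily for illustration:

```r
# Quarterly observations starting 1975 Q1 (first eight values from the question)
obs <- c(5172, 6162, 6979, 5418, 4801, 5849, 6292, 5261)
y   <- ts(obs, start = c(1975, 1), frequency = 4)

# In-sample / out-of-sample split at the end of 1975
train <- window(y, end = c(1975, 4))
hold  <- window(y, start = c(1976, 1))
```

With the full 72 observations you would set end = c(1992, 4) implicitly and pick a later split point, e.g. holding out the last two years.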

Error in R code for LPPL model

I am learning R and had a problem when I tried to run an LPPL fit using nls. I used monthly data for the KLSE.
library(tseries)
library(zoo)
ts <- read.table(file.choose(), header = TRUE)
ts
rdate Close Date
1 8/1998 302.91 0
2 9/1998 373.52 100
3 10/1998 405.33 200
4 11/1998 501.47 300
5 12/1998 586.13 400
6 1/1999 591.43 500
7 2/1999 542.23 600
8 3/1999 502.82 700
9 4/1999 674.96 800
10 5/1999 743.04 900
11 6/1999 811.10 1000
12 7/1999 768.69 1100
13 8/1999 767.06 1200
14 9/1999 675.45 1300
15 10/1999 742.87 1400
16 11/1999 734.66 1500
17 12/1999 812.33 1600
18 1/2000 922.10 1700
19 2/2000 982.24 1800
20 3/2000 974.38 1900
21 4/2000 898.35 2000
22 5/2000 911.51 2100
23 6/2000 833.37 2200
24 7/2000 798.83 2300
25 8/2000 795.84 2400
26 9/2000 713.51 2500
27 10/2000 752.36 2600
28 11/2000 729.95 2700
29 12/2000 679.64 2800
30 1/2001 727.73 2900
31 2/2001 709.39 3000
32 3/2001 647.48 3100
33 4/2001 584.50 3200
34 5/2001 572.88 3300
35 6/2001 592.99 3400
36 7/2001 659.40 3500
37 8/2001 687.16 3600
38 9/2001 615.34 3700
39 10/2001 600.07 3800
40 11/2001 638.02 3900
41 12/2001 696.09 4000
42 1/2002 718.82 4100
43 2/2002 708.91 4200
44 3/2002 756.10 4300
45 4/2002 793.99 4400
46 5/2002 741.76 4500
47 6/2002 725.44 4600
48 7/2002 721.59 4700
49 8/2002 711.36 4800
50 9/2002 638.01 4900
51 10/2002 659.57 5000
52 11/2002 629.22 5100
53 12/2002 646.32 5200
54 1/2003 664.77 5300
55 2/2003 646.80 5400
56 3/2003 635.72 5500
57 4/2003 630.37 5600
58 5/2003 671.46 5700
59 6/2003 691.96 5800
60 7/2003 720.56 5900
61 8/2003 743.30 6000
62 9/2003 733.45 6100
63 10/2003 817.12 6200
64 11/2003 779.28 6300
65 12/2003 793.94 6400
66 1/2004 818.94 6500
67 2/2004 879.24 6600
68 3/2004 901.85 6700
69 4/2004 838.21 6800
70 5/2004 810.67 6900
71 6/2004 819.86 7000
72 7/2004 833.98 7100
73 8/2004 827.98 7200
74 9/2004 849.96 7300
75 10/2004 861.14 7400
76 11/2004 917.19 7500
77 12/2004 907.43 7600
78 1/2005 916.27 7700
79 2/2005 907.38 7800
80 3/2005 871.35 7900
81 4/2005 878.96 8000
82 5/2005 860.73 8100
83 6/2005 888.32 8200
84 7/2005 937.39 8300
85 8/2005 913.56 8400
86 9/2005 927.54 8500
87 10/2005 910.76 8600
88 11/2005 896.13 8700
89 12/2005 899.79 8800
90 1/2006 914.01 8900
91 2/2006 928.94 9000
92 3/2006 926.63 9100
93 4/2006 949.23 9200
94 5/2006 927.78 9300
95 6/2006 914.69 9400
96 7/2006 935.85 9500
97 8/2006 958.12 9600
98 9/2006 967.55 9700
99 10/2006 988.30 9800
100 11/2006 1080.66 9900
101 12/2006 1096.24 10000
102 1/2007 1189.35 10100
103 2/2007 1196.45 10200
104 3/2007 1246.87 10300
105 4/2007 1322.25 10400
106 5/2007 1346.89 10500
107 6/2007 1354.38 10600
108 7/2007 1373.71 10700
109 8/2007 1273.93 10800
110 9/2007 1336.30 10900
111 10/2007 1413.65 11000
112 11/2007 1396.98 11100
113 12/2007 1445.03 11200
df <- data.frame(ts)
df <- data.frame(Date = df$Date, Y = df$Close)
df <- df[!is.na(df$Y), ]
library(minpack.lm)
library(ggplot2)
f <- function(pars, xx) {
  pars$a + pars$b * (pars$tc - xx)^pars$m *
    (1 + pars$c * cos(pars$omega * log(pars$tc - xx) + pars$phi))
}
resids <- function(p, observed, xx) observed - f(p, xx)
nls.out <- nls.lm(par = list(a = 7.048293, b = -8.8e-5, tc = 112000, m = 0.5,
                             omega = 3.03, phi = -9.76, c = -14),
                  fn = resids, observed = df$Y, xx = df$Date,
                  control = nls.lm.control(maxiter = 1024, ftol = 1e-6, maxfev = 1e6))
par <- nls.out$par
nls.final <- nls(Y ~ a + (tc - Date)^m * (b + c * cos(omega * log(tc - Date) + phi)),
                 data = df, start = par, algorithm = "plinear",
                 control = nls.control(maxiter = 1024, minFactor = 1e-8))
Error in qr.solve(QR.B, cc) : singular matrix 'a' in solve
I got a singular matrix error. What do I need to change to avoid it?
Your problem is that the cosine term is zero for some value, which makes the matrix singular; you basically need to constrain the parameter space. Additionally, I would read more of the literature, since some clever trigonometric manipulation can eliminate the phi parameter; this improves the nonlinear optimization enough to get useful and reproducible results.
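Constraining the parameter space can be done with box constraints. A toy base-R sketch on a simplified power-law model (no cosine term, made-up data and bounds), assuming nls with algorithm = "port", which accepts lower/upper so that tc stays above max(x) and the powers/logs remain defined:

```r
set.seed(1)
x <- 1:100
y <- 5 + 2 * (120 - x)^0.5 + rnorm(100, sd = 0.1)  # true tc = 120, m = 0.5

fit <- nls(y ~ a + b * (tc - x)^m,
           start = list(a = 4, b = 1, tc = 130, m = 0.4),
           lower = c(a = -Inf, b = -Inf, tc = 101, m = 0.1),
           upper = c(a =  Inf, b =  Inf, tc = 500, m = 0.9),
           algorithm = "port")
coef(fit)[["tc"]]  # bounded away from the data range
```

nls.lm from minpack.lm accepts lower/upper arguments in the same spirit, so the same bounds can be applied to the full LPPL fit.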

Create a column according to the levels of a vector

I have a data frame with a column (Species) containing 153 levels of a factor:
> out80[1:10,1:3]
Species Plots100 Plots80
1 02 901 2091
2 03 921 2094
3 04 29 60
4 05 1255 2145
5 06 563 850
6 07 38 53
7 08S 102 144
8 09 897 1734
9 10 503 1084
10 11 134 334
What I would like to do is look up each level of the factor in the code column of another data frame (species.tab2) and create a new column in out80 with the name associated with that level from the French_name column.
> head(species.tab2[,1:3])
var code French_name
1 ESPAR 2 CHENE PEDONCULE
2 ESPAR 3 CHENE SESSILE
3 ESPAR 3 CHENE SESSILE
4 ESPAR 3 CHENE SESSILE
5 ESPAR 4 CHENE ROUGE
6 ESPAR 5 CHENE PUBESCENT
I have tried doing it with ifelse or with a loop, but I can't get it to work.
The result would be something like this:
Species Plots100 Plots80 Name
1 02 901 2091 CHENE PEDONCULE
2 03 921 2094 CHENE SESSILE
etc...
EDIT: Here are the levels:
> out80$Species
[1] 02 03 04 05 06 07 08S 09 10 11 12P 12V 13B 13C 13G 14 15P 15S 16
[20] 17C 17F 17O 18C 18D 18M 19 20G 20P 20X 21C 21M 21O 22C 22G 22M 22S 23A 23AB
[39] 23AF 23AM 23C 23F 23PA 23PC 23PD 23PF 23PM 23SO 23SS 24 25B 25C 25FD 25FR 25M 25R 25V
[58] 26E 26OC 27C 27N 28 29AF 29AI 29CM 29EN 29LI 29MA 29MI 31 32 33B 33G 33N 34 36
[77] 37 38AL 38AU 39 40 41 42 49AA 49AE 49AM 49BO 49BS 49C 49CA 49CS 49EA 49EV 49FL 49IA
[96] 49LN 49MB 49PC 49PL 49PM 49PS 49PT 49RA 49RC 49RP 49RT 49SN 49TF 49TG 51 52 53CA 53CO 53S
[115] 54 55 56 57A 57B 58 59 61 62 63 64 65 66 67 68CC 68CE 68CJ 68CL 68CM
[134] 68EO 68PC 68PM 68SC 68SV 68TG 68TH 69 69JC 69JO 70SB 70SC 70SE 71 72V 73 74H 74J 76
[153] 77
> species.tab2$code
[1] 2 3 3 3 4 5 5 5 6 6 6 7 08S 9 10 10 11 12P 12V
[20] 12V 13B 13C 13G 14 14 14 15P 15S 15S 16 17C 17F 17O 17O 18C 18C 18D 18D
[39] 18M 19 19 20G 20P 20X 21C 21M 21O 22C 22G 22G 22M 22S 23A 23A 23AB 23AF 23AM
[58] 23C 23F 23PA 23PA 23PC 23PD 23PF 23PM 23SO 24 25B 25C 25D 25E3 25FR 25M 25R 25V 26E
[77] 26E 26OC 27C 27N 28 29AI 29CM 29EN 29MA 29MI 29LI 31 32 33B 33G 33N 34 36 37
[96] 38AU 38AL 39 40 41 42 49AA 49AE 49AM 49BO 49BO 49BS 49C 49CA 49CS 49EA 49EV 49FL 49IA
[115] 49LN 49MB 49PC 49PL 49PM 49PS 49PT 49RA 49RC 49RP 49RT 49SN 49TF 49TG 51 52 53CA 53CO 53S
[134] 54 55 56 57A 57B 58 59 61 62 63 64 65 66 67 68CC 68CJ 68CL 68CM 68EO
[153] 68PC 68PM 68SC 68SV 68TG 68TH 69 69JC 69JO 70SB 70SC 70SE 71 72V 73 74H 74J 76 77
There is some repetition in code because the same code can have 2 or 3 different French names. For these I just want one of the names; it doesn't matter which one.
Thank you for your help.
Use merge after creating a new code column in out80:
out80$code <- gsub('^0|S$','',out80$Species)
merge(out80,species.tab2)
code Species Plots100 Plots80 var French_name
1 2 02 901 2091 ESPAR CHENE PEDONCULE
2 3 03 921 2094 ESPAR CHENE SESSILE
3 3 03 921 2094 ESPAR CHENE SESSILE
4 3 03 921 2094 ESPAR CHENE SESSILE
5 4 04 29 60 ESPAR CHENE ROUGE
6 5 05 1255 2145 ESPAR CHENE PUBESCENT
EDIT
code and Species don't match for levels 01, 02, ..., so I create a new column to match them.
gsub('^0([0-9])$','\\1',out80$Species)
A data.table solution:
require(data.table)
dt1 <- data.table(out80)
# positive look ahead
# match 0's at beginning followed by numbers
# if found, replace all beginning 0's with ""
dt1[, key := sub("^[0]+(?=[0-9]+$)", "", Species, perl=T)]
setkey(dt1, "key")
dt2 <- data.table(species.tab2)
dt2[, code := as.character(code)]
dt2[, key := sub("^[0]+(?=[0-9]+$)", "", code, perl=T)]
setkey(dt2, "key")
merge(dt1, dt2)
# key Species Plots100 Plots80 var code French_name
# 1: 2 02 901 2091 ESPAR 2 CHENE_PEDONCULE
# 2: 3 03 921 2094 ESPAR 3 CHENE_SESSILE
# 3: 3 03 921 2094 ESPAR 3 CHENE_SESSILE
# 4: 3 03 921 2094 ESPAR 3 CHENE_SESSILE
# 5: 4 04 29 60 ESPAR 4 CHENE_ROUGE
# 6: 5 05 1255 2145 ESPAR 5 CHENE_PUBESCENT
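A base-R alternative to the merges above is match(), which keeps out80's row order and naturally takes the first French_name when a code repeats, matching the "doesn't matter which one" requirement. A sketch with hypothetical miniature versions of the two data frames:

```r
out80 <- data.frame(Species  = c("02", "03", "04"),
                    Plots100 = c(901, 921, 29),
                    stringsAsFactors = FALSE)
species.tab2 <- data.frame(code = c("2", "3", "3", "4"),
                           French_name = c("CHENE PEDONCULE", "CHENE SESSILE",
                                           "CHENE SESSILE", "CHENE ROUGE"),
                           stringsAsFactors = FALSE)

# Strip leading zeros only when the rest of the string is all digits,
# so codes like "08S" are left untouched
strip0 <- function(v) sub("^0+(?=[0-9]+$)", "", v, perl = TRUE)

out80$Name <- species.tab2$French_name[match(strip0(out80$Species),
                                             strip0(species.tab2$code))]
out80$Name  # first match wins for duplicated codes
```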