vis.gam transparent at top of "persp" graph - r

I am running a GAMM using package mgcv. The model is running fine and gives an output that makes sense, but when I use vis.gam(plot.type="persp") my graph appears like this:
[image: persp plot with a transparent region at the top of the surface]
Why is this happening? When I use vis.gam(plot.type="contour") there is no area which is transparent.
It does not appear to be simply a problem with the heat color palette; the same thing happens when I change the color scheme of the "persp" plot:
[image: persp plot with "topo" colour, showing the same transparent region]
The contour plot is completely filled while the persp plot is still transparent at the top.
Data:
logcpue assnage distkm fsamplingyr
1 -1.5218399 7 3.490 2015
2 -1.6863990 4 3.490 2012
3 -1.4534337 6 3.490 2014
4 -1.5207723 5 3.490 2013
5 -2.4061258 2 3.490 2010
6 -2.5427262 3 3.490 2011
7 -1.6177367 3 3.313 1998
8 -4.4067192 10 3.313 2005
9 -4.3438054 11 3.313 2006
10 -2.8834031 7 3.313 2002
11 -2.3182512 2 3.313 1997
12 -4.1108738 1 3.235 2010
13 -2.0149030 3 3.235 2012
14 -1.4900912 6 3.235 2015
15 -3.7954892 2 3.235 2011
16 -1.6499840 4 3.235 2013
17 -1.9924302 5 3.235 2014
18 -1.2122716 4 3.189 1998
19 -0.6675703 3 3.189 1997
20 -4.7957905 7 3.106 1998
21 -3.8763958 6 3.106 1997
22 -1.2205021 4 3.073 2010
23 -1.9262374 7 3.073 2013
24 -3.3463891 9 3.073 2015
25 -1.7805862 2 3.073 2008
26 -3.2451931 8 3.073 2014
27 -1.4441139 5 3.073 2011
28 -1.4395389 6 3.073 2012
29 -1.6357552 4 2.876 2014
30 -1.3449091 5 2.876 2015
31 -2.3782225 3 2.876 2013
32 -4.4886364 1 2.876 2011
33 -2.6026897 2 2.876 2012
34 -3.5765503 1 2.147 2002
35 -4.8040211 9 2.147 2010
36 -1.3993664 5 2.147 2006
37 -1.2712250 4 2.147 2005
38 -1.8495790 7 2.147 2008
39 -2.5073795 1 2.034 2012
40 -2.0654553 4 2.034 2015
41 -3.6309855 2 2.034 2013
42 -2.2643639 3 2.034 2014
43 -2.2643639 6 1.452 2006
44 -3.3900241 8 1.452 2008
45 -4.9628446 2 1.452 2002
46 -2.0088240 5 1.452 2005
47 -3.9186675 1 1.323 2013
48 -4.3438054 2 1.323 2014
49 -3.5695327 3 1.323 2015
50 -1.6986690 7 1.200 2005
51 -3.2451931 8 1.200 2006
52 -0.9024016 4 1.200 2002
library(mgcv)
f1 <- formula(logcpue ~ s(assnage) + distkm)
m1 <- gamm(f1, random = list(fsamplingyr = ~1),
           method = "REML",
           data = ycsnew)
vis.gam(m1$gam, color = "topo", plot.type = "persp", theta = 180)
vis.gam(m1$gam, color = "heat", plot.type = "persp", theta = 180)
vis.gam(m1$gam, view = c("assnage", "distkm"),
        plot.type = "contour", color = "heat", las = 1)
vis.gam(m1$gam, view = c("assnage", "distkm"),
        plot.type = "contour", color = "terrain", las = 1, contour.col = "black")

The code of vis.gam contains this line:
surf.col[surf.col > max.z * 2] <- NA
I am unable to see exactly what it is intended to do, and it appears rather ad hoc. NA color values are generally rendered as transparent. If you make a copy of the function with that line commented out, and assign the copy the environment of the original with
environment(vis.gam2) <- environment(vis.gam)
you get complete coloring of the surface.
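A minimal sketch of that workaround (vis.gam2 is just an illustrative name; any way of copying the function and deleting that one line will do):
library(mgcv)
# Make an editable copy of vis.gam; in the editor that opens, comment out
# (or delete) the line: surf.col[surf.col > max.z * 2] <- NA
vis.gam2 <- edit(vis.gam)
# The copy needs access to mgcv's internal helpers:
environment(vis.gam2) <- environment(vis.gam)
# Then call it exactly as you would vis.gam:
# vis.gam2(m1$gam, color = "heat", plot.type = "persp", theta = 180)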


How to change grouped data into ungrouped data

I have grouped data that I want to convert to ungrouped data.
year<-c(rep(2014,4),rep(2015,4))
Age<-rep(c(22,23,24,25),2)
n<-c(1,1,3,2,0,2,3,1)
mydata<-data.frame(year,Age,n)
I would like to have a dataset like the one below created from the previous one.
year Age
1 2014 22
2 2014 23
3 2014 24
4 2014 24
5 2014 24
6 2014 25
7 2014 25
8 2015 23
9 2015 23
10 2015 24
11 2015 24
12 2015 24
13 2015 25
Try
mydata[rep(1:nrow(mydata),mydata$n),]
year Age n
1 2014 22 1
2 2014 23 1
3 2014 24 3
3.1 2014 24 3
3.2 2014 24 3
4 2014 25 2
4.1 2014 25 2
6 2015 23 2
6.1 2015 23 2
7 2015 24 3
7.1 2015 24 3
7.2 2015 24 3
8 2015 25 1
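If you also want to drop the n column and reset the duplicated row names (3.1, 3.2, ...) so the result matches the desired output exactly, a small follow-up:
out <- mydata[rep(seq_len(nrow(mydata)), mydata$n), c("year", "Age")]
rownames(out) <- NULL
out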
Here's a tidyverse solution:
library(tidyverse)
mydata %>%
  uncount(n)
which gives:
year Age
1 2014 22
2 2014 23
3 2014 24
4 2014 24
5 2014 24
6 2014 25
7 2014 25
8 2015 23
9 2015 23
10 2015 24
11 2015 24
12 2015 24
13 2015 25
You can also use tidyr syntax for this:
library(tidyr)
year<-c(rep(2014,4),rep(2015,4))
Age<-rep(c(22,23,24,25),2)
n<-c(1,1,3,2,0,2,3,1)
mydata<-data.frame(year,Age,n)
uncount(mydata, n)
#> year Age
#> 1 2014 22
#> 2 2014 23
#> 3 2014 24
#> 4 2014 24
#> 5 2014 24
#> 6 2014 25
#> 7 2014 25
#> 8 2015 23
#> 9 2015 23
#> 10 2015 24
#> 11 2015 24
#> 12 2015 24
#> 13 2015 25
But of course you shouldn't use tidyr just because it is tidyr :) There are alternative views of the Tidyverse "dialect" of the R language and its promotion by RStudio.
We can use tidyr::complete
library(tidyr)
library(dplyr)
mydata %>%
  group_by(year, Age) %>%
  complete(n = seq_len(n)) %>%
  select(-n) %>%
  ungroup()
# A tibble: 14 × 2
year Age
<dbl> <dbl>
1 2014 22
2 2014 23
3 2014 24
4 2014 24
5 2014 24
6 2014 25
7 2014 25
8 2015 23
9 2015 23
10 2015 24
11 2015 24
12 2015 24
13 2015 25
14 2015 22
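Note that complete() also keeps the group whose n is 0 (row 14, year 2015 / Age 22 above), which the expected output does not contain. If that matters, a small tweak of the same pipeline is to drop those groups first:
mydata %>%
  filter(n > 0) %>%
  group_by(year, Age) %>%
  complete(n = seq_len(n)) %>%
  select(-n) %>%
  ungroup()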

Numerical Method for SARIMAX Model using R

My friend is currently working on an assignment about parameter estimation for a time series model, SARIMAX (Seasonal ARIMA with eXogenous variables), using the Maximum Likelihood Estimation (MLE) method. His data are monthly rainfall from 2000 to 2012, with the Indian Ocean Dipole (IOD) index as the exogenous variable.
Here is the data:
MONTH YEAR RAINFALL IOD
1 1 2000 15.3720526 0.0624
2 2 2000 10.3440804 0.1784
3 3 2000 14.6116392 0.3135
4 4 2000 18.6842179 0.3495
5 5 2000 15.2937896 0.3374
6 6 2000 15.0233152 0.1946
7 7 2000 11.1803399 0.3948
8 8 2000 11.0589330 0.4391
9 9 2000 10.1488916 0.3020
10 10 2000 21.1187121 0.2373
11 11 2000 15.3980518 -0.0324
12 12 2000 18.9393770 -0.0148
13 1 2001 19.1075901 -0.2448
14 2 2001 14.9097284 0.1673
15 3 2001 19.2379833 0.1538
16 4 2001 19.6900990 0.3387
17 5 2001 8.0684571 0.3578
18 6 2001 14.0463518 0.3394
19 7 2001 5.9916609 0.1754
20 8 2001 8.4439327 0.0048
21 9 2001 11.8321596 0.1648
22 10 2001 24.3700636 -0.0653
23 11 2001 22.3584436 0.0291
24 12 2001 23.6114379 0.1731
25 1 2002 17.8409641 0.0404
26 2 2002 14.7377067 0.0914
27 3 2002 21.2226294 0.1766
28 4 2002 16.6403125 -0.1512
29 5 2002 10.8074049 -0.1072
30 6 2002 6.3796552 0.0244
31 7 2002 17.0704423 0.0542
32 8 2002 1.7606817 0.0898
33 9 2002 5.3665631 0.6736
34 10 2002 8.3246622 0.7780
35 11 2002 17.8044938 0.3616
36 12 2002 16.7062862 0.0673
37 1 2003 13.5572859 -0.0628
38 2 2003 17.1113997 0.2038
39 3 2003 14.9899967 0.1239
40 4 2003 14.0996454 0.0997
41 5 2003 11.4017542 0.0581
42 6 2003 6.7749539 0.3490
43 7 2003 7.1484264 0.4410
44 8 2003 10.3004854 0.4063
45 9 2003 10.6630202 0.3289
46 10 2003 20.6518764 0.1394
47 11 2003 20.8638443 0.1077
48 12 2003 20.5548048 0.4093
49 1 2004 16.0436903 0.2257
50 2 2004 17.2568827 0.2978
51 3 2004 20.2361063 0.2523
52 4 2004 11.6619038 0.1212
53 5 2004 12.8296532 -0.3395
54 6 2004 8.4202138 -0.1764
55 7 2004 15.5916644 0.0118
56 8 2004 0.9486833 0.1651
57 9 2004 7.2732386 0.2825
58 10 2004 18.0083314 0.3747
59 11 2004 14.4672043 0.1074
60 12 2004 17.3637554 0.0926
61 1 2005 18.9420168 0.0551
62 2 2005 17.0146995 -0.3716
63 3 2005 23.3002146 -0.2641
64 4 2005 17.8689675 0.2829
65 5 2005 17.2365890 0.1883
66 6 2005 14.0178458 0.0347
67 7 2005 12.6925175 -0.0680
68 8 2005 9.3861600 -0.0420
69 9 2005 11.7132404 -0.1425
70 10 2005 18.5768673 -0.0514
71 11 2005 19.6723156 -0.0008
72 12 2005 18.3248465 -0.0659
73 1 2006 18.6252517 0.0560
74 2 2006 18.7002674 -0.1151
75 3 2006 23.4882950 -0.0562
76 4 2006 19.5652754 0.1862
77 5 2006 13.6857590 0.0105
78 6 2006 11.1265448 0.1504
79 7 2006 11.0227038 0.3490
80 8 2006 7.6550637 0.5267
81 9 2006 1.8708287 0.8089
82 10 2006 5.4129474 0.9479
83 11 2006 15.2249795 0.7625
84 12 2006 14.1703917 0.3941
85 1 2007 22.8691932 0.4027
86 2 2007 14.3317829 0.3353
87 3 2007 13.0766968 0.2792
88 4 2007 23.2335964 0.2960
89 5 2007 12.2474487 0.4899
90 6 2007 11.3357840 0.2445
91 7 2007 9.3112835 0.3629
92 8 2007 1.6431677 0.5396
93 9 2007 6.8483575 0.6252
94 10 2007 13.1529464 0.4540
95 11 2007 14.5120639 0.2489
96 12 2007 18.7909553 0.0054
97 1 2008 17.6493626 0.3037
98 2 2008 13.3828248 0.1166
99 3 2008 19.0525589 0.2730
100 4 2008 17.3262806 0.0467
101 5 2008 5.2345009 0.4020
102 6 2008 3.3166248 0.4263
103 7 2008 10.1094016 0.5558
104 8 2008 11.7260394 0.4236
105 9 2008 10.7470926 0.4762
106 10 2008 15.1591557 0.4127
107 11 2008 25.5558213 0.1474
108 12 2008 18.2455474 0.1755
109 1 2009 14.5430396 0.2185
110 2 2009 12.8569048 0.3521
111 3 2009 24.0707291 0.2680
112 4 2009 16.0374562 0.3234
113 5 2009 7.2387844 0.4757
114 6 2009 13.8021737 0.3078
115 7 2009 7.5232972 0.1179
116 8 2009 6.3403470 0.1999
117 9 2009 4.6583259 0.2814
118 10 2009 13.0958008 0.3646
119 11 2009 15.3329710 0.1914
120 12 2009 19.0394328 0.3836
121 1 2010 15.5080624 0.4732
122 2 2010 17.1551742 0.2134
123 3 2010 23.9729014 0.6320
124 4 2010 18.2537667 0.5644
125 5 2010 18.2236111 0.1881
126 6 2010 14.6082169 0.0680
127 7 2010 13.6161669 0.3111
128 8 2010 11.1220502 0.2472
129 9 2010 20.7870152 0.1259
130 10 2010 19.5371441 -0.0529
131 11 2010 24.8837296 -0.2133
132 12 2010 15.5016128 0.0233
133 1 2011 17.3435867 0.3739
134 2 2011 17.6096564 0.4228
135 3 2011 19.0682983 0.5413
136 4 2011 20.4890214 0.3569
137 5 2011 12.0540450 0.1313
138 6 2011 12.5896783 0.2642
139 7 2011 5.0990195 0.5356
140 8 2011 6.5726707 0.6490
141 9 2011 2.5099801 0.5884
142 10 2011 17.6380271 0.7376
143 11 2011 17.5128524 0.6004
144 12 2011 17.2655727 0.0990
145 1 2012 16.6883193 0.2272
146 2 2012 20.8374663 0.1049
147 3 2012 16.7002994 0.1991
148 4 2012 18.7962762 -0.0596
149 5 2012 16.9292646 -0.1165
150 6 2012 11.6490343 0.2207
151 7 2012 6.2529993 0.8586
152 8 2012 5.8991525 0.9473
153 9 2012 7.8485667 0.8419
154 10 2012 12.5817328 0.4928
155 11 2012 24.7770055 0.1684
156 12 2012 23.2486559 0.4899
He works in R for this because it has packages for analysing SARIMAX models, and so far he has done well with the arimax() function from the TSA package, using a seasonal ARIMA order of (1,0,1).
Here is his code:
# Import the data
data <- read.csv("C:/DATA.csv", header = TRUE)
rainfall <- data$RAINFALL
exo <- data$IOD
# Convert the series to ts objects so R treats them as monthly time series
library(forecast)
rainfall_ts <- ts(rainfall, start = c(2000, 1), end = c(2012, 12), frequency = 12)
exo_ts <- ts(exo, start = c(2000, 1), end = c(2012, 12), frequency = 12)
# Fit the SARIMAX model with seasonal ARIMA order (1,0,1); estimation method is MLE ('ML')
library(TSA)
model_ts <- arimax(log(rainfall_ts), order = c(1, 0, 1),
                   seasonal = list(order = c(1, 0, 1), period = 12),
                   xreg = exo_ts, method = "ML")
Below is the result:
> model_ts
Call:
arimax(x = log(rainfall_ts), order = c(1, 0, 1), seasonal = list(order = c(1,
0, 1), period = 12), xreg = exo_ts, method = "ML")
Coefficients:
          ar1      ma1    sar1     sma1  intercept     xreg
       0.5730  -0.4342  0.9996  -0.9764     2.6757  -0.4894
s.e.   0.2348   0.2545  0.0018   0.0508     0.1334   0.1489
sigma^2 estimated as 0.1521: log likelihood = -86.49, aic = 184.99
Although the code works, his lecturer expected more.
Theoretically, because he used MLE, he has shown that setting the first derivatives of the log-likelihood function to zero gives only implicit solutions. This means the estimation cannot be completed analytically, so a numerical method is needed to finish it.
So this is what my friend's lecturer expects: he should at least be able to convince the lecturer that the estimation really does have to be done numerically and, if so, show which numerical method R uses (Newton-Raphson, BFGS, BHHH, etc.).
The problem is that the arimax() function gives no choice of numerical method when the estimation has to be carried out numerically:
model_ts <- arimax(log(rainfall_ts), order = c(1, 0, 1), seasonal = list(order = c(1, 0, 1), period = 12), xreg = exo_ts, method = 'ML')
The method argument above selects the estimation criterion, and the available choices are ML, CSS, and CSS-ML. Clearly the call says nothing about the numerical optimization method, and that is the issue.
So is there any way to find out which numerical method R uses? Or does my friend simply have to write his own program instead of relying on arimax()?
If there are any errors in my code, please let me know. I also apologize for any grammatical or vocabulary mistakes. English is not my native language.
Some suggestions:
Estimate the model with each of the methods ML, CSS, and CSS-ML. Do the parameter estimates agree? (A short sketch for doing this follows these suggestions.)
You can view the source code of the arimax() function by typing arimax, View(arimax) or getAnywhere(arimax) in the console.
Or you can debug: place a breakpoint on the line model_ts <- arimax(...) and then source (or debugSource()) your script. You can then step into the arimax function and see/verify for yourself which optimization method arimax uses.
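A minimal sketch of the first suggestion, assuming the rainfall_ts and exo_ts objects created above; it fits the same model under each estimation criterion and lines up the coefficients side by side:
library(TSA)
methods <- c("ML", "CSS", "CSS-ML")
fits <- lapply(methods, function(m)
  arimax(log(rainfall_ts), order = c(1, 0, 1),
         seasonal = list(order = c(1, 0, 1), period = 12),
         xreg = exo_ts, method = m))
# One column of coefficients per estimation criterion
coefs <- sapply(fits, coef)
colnames(coefs) <- methods
coefs
On the optimizer itself: arimax() is documented as a modification of stats::arima(), and arima() maximizes the log-likelihood numerically with optim() (its optim.method argument defaults to "BFGS"), so it is worth checking in the arimax source (getAnywhere(arimax)) whether that default is carried over.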

Creating a vector with multiple sequences based on number of IDs' repetitions

I've got a data frame with panel data: subjects' characteristics through time. I need to create a column with a sequence from 1 to the number of years for each subject. For example, if subject 1 is in the data frame from 2000 to 2005, I need the following sequence: 1,2,3,4,5,6.
Below is a small fraction of my data. The last column (exp) is what I am trying to get. Additionally, if you have a look at the first subject (13) you'll see that in 2008 the value of qtty is zero. In that case I just need an NA or a code (0, 1, -9999); it doesn't matter which one.
Below the data is the code I tried to get that vector, but it didn't work.
Any help will be much appreciated.
subject season qtty exp
13 2000 29 1
13 2001 29 2
13 2002 29 3
13 2003 29 4
13 2004 29 5
13 2005 27 6
13 2006 27 7
13 2007 27 8
13 2008 0 NA
28 2000 18 1
28 2001 18 2
28 2002 18 3
28 2003 18 4
28 2004 18 5
28 2005 18 6
28 2006 18 7
28 2007 18 8
28 2008 18 9
28 2009 20 10
28 2010 20 11
28 2011 20 12
28 2012 20 13
35 2000 21 1
35 2001 21 2
35 2002 21 3
35 2003 21 4
35 2004 21 5
35 2005 21 6
35 2006 21 7
35 2007 21 8
35 2008 21 9
35 2009 14 10
35 2010 11 11
35 2011 11 12
35 2012 10 13
My code:
numbY<-aggregate(season ~ subject, data = toCountY,length)
colnames(numbY)<-c("subject","inFish")
toCountY$inFish<-numbY$inFish[match(toCountY$subject,numbY$subject)]
numbYbyFisher<-unique(numbY)
seqY<-aggregate(numbYbyFisher$inFish, by=list(numbYbyFisher$subject), function(x)seq(1,x,1))
I am using ddply (from the plyr package) and I distinguish 2 cases:
Either you generate a sequence along subject and replace it with NA where qtty is zero:
library(plyr)
ddply(dat, .(subject), transform, new.exp = ifelse(qtty == 0, NA, seq_along(subject)))
Or you generate a sequence along the non-zero qtty values, inserting NA where qtty is zero:
ddply(dat, .(subject), transform, new.exp = {
  hh <- seq_along(which(qtty != 0))
  if (length(which(qtty == 0)) > 0)
    hh <- append(hh, NA, which(qtty == 0) - 1)
  hh
})
EDITED
# Base R; assumes the columns are available as vectors (e.g. after attach())
# and that the rows are ordered by subject
ind <- qtty != 0
exp <- numeric(length(subject))   # rows where qtty == 0 keep 0, one of the accepted codes
temp <- list()
for (i in 1:length(unique(subject[ind]))) {
  temp[[i]] <- seq(from = 1, to = table(subject[ind])[i])
}
exp[ind] <- unlist(temp)
This will provide what you need.
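For completeness, a base-R alternative using ave(), in the spirit of the first ddply case (a sketch assuming the data frame is called dat with the columns shown above):
# Number the rows within each subject, then blank out the rows where qtty is zero
dat$exp <- ave(dat$qtty, dat$subject, FUN = seq_along)
dat$exp[dat$qtty == 0] <- NA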

How to make a bar graph in ggplot that groups months of different years

My dataframe, df:
df
EffYr EffMo count dts
2 2012 1 1 2012-01-01
3 2012 2 3 2012-02-01
4 2012 3 1 2012-03-01
5 2012 5 1 2012-05-01
6 2012 6 1 2012-06-01
7 2012 7 2 2012-07-01
8 2012 8 11 2012-08-01
9 2012 9 84 2012-09-01
10 2012 10 184 2012-10-01
11 2012 11 165 2012-11-01
12 2012 12 246 2012-12-01
13 2013 1 414 2013-01-01
14 2013 2 130 2013-02-01
15 2013 3 182 2013-03-01
16 2013 4 261 2013-04-01
17 2013 5 229 2013-05-01
18 2013 6 249 2013-06-01
19 2013 7 330 2013-07-01
20 2013 8 135 2013-08-01
Each row of df represents a "month-year", the earliest being Jan 2012 and the latest being Aug 2013. I want to plot a bar graph (using ggplot2) where each bar represents a row of df with the bar height equal to the row's count. So, I should have 24 bars in total.
I want my x axis to be divided into 12 intervals: Jan-Dec, and bars that represent the same calendar month should lie in the same "month interval". For example, if df has a row for Jan 2011, Jan 2012, Jan 2013, then the Jan portion of my graph should have 3 bars so that I can compare my business's performance in the month of January for subsequent years.
Thanks
Edit: I want something that looks like
ggplot(diamonds, aes(cut, fill=cut)) + geom_bar() +
facet_grid(. ~ clarity)
But broken down by month. I tried to modify that code to fit my data, but never could get it right.
@Ben you're asking a number of ggplot2 questions. I would recommend you sit down with some good ggplot2 resources and work through the examples to become more skilled. Here are 2 excellent resources I use often:
http://docs.ggplot2.org/current/
http://www.cookbook-r.com/Graphs/
Now the solution I think you're after:
## Recreate the data frame from the question
dat <- read.table(text = " EffYr EffMo count dts
2 2012 1 1 2012-01-01
3 2012 2 3 2012-02-01
4 2012 3 1 2012-03-01
5 2012 5 1 2012-05-01
6 2012 6 1 2012-06-01
7 2012 7 2 2012-07-01
8 2012 8 11 2012-08-01
9 2012 9 84 2012-09-01
10 2012 10 184 2012-10-01
11 2012 11 165 2012-11-01
12 2012 12 246 2012-12-01
13 2013 1 414 2013-01-01
14 2013 2 130 2013-02-01
15 2013 3 182 2013-03-01
16 2013 4 261 2013-04-01
17 2013 5 229 2013-05-01
18 2013 6 249 2013-06-01
19 2013 7 330 2013-07-01
20 2013 8 135 2013-08-01", header = TRUE)
library(ggplot2)
dat$month <- factor(month.name[dat$EffMo], levels = month.name)
dat$year <- as.factor(dat$EffYr)
ggplot(dat, aes(month, fill = year)) + geom_bar(aes(weight = count), position = "dodge")
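If you would rather have the faceted look of the diamonds example from your edit, a sketch along the same lines (assuming a reasonably current ggplot2, where geom_col() is available):
ggplot(dat, aes(year, count, fill = year)) +
  geom_col() +
  facet_grid(. ~ month)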

Barplot beside issue

I have managed to aggregate some data into the following:
Month Year Number
1 1 2011 3885
2 2 2011 3713
3 3 2011 6189
4 4 2011 3812
5 5 2011 916
6 6 2011 3813
7 7 2011 1324
8 8 2011 1905
9 9 2011 5078
10 10 2011 1587
11 11 2011 3739
12 12 2011 3560
13 1 2012 1790
14 2 2012 1489
15 3 2012 1907
16 4 2012 1615
I am trying to create a barplot where the bars for the months are next to each other, so for the above example January through April will have two bars (one for 2011 and one for 2012) and the remaining months will only have one bar representing 2011.
I know I have to use beside = TRUE, but I guess I need to create some sort of matrix to get the barplot to display properly, and I am having trouble figuring out what that step is. I have a feeling it involves matrix(), but for some reason I am completely stumped by what seems like a very simple problem.
Also, I have this data: y = c('Jan','Feb','Mar','Apr','May','Jun','Jul','Aug','Sep','Oct','Nov','Dec'), which I would like to use as my names.arg. When I try to use it with the above data it tells me "undefined columns selected", which I take to mean that I need 16 values in y. How can I fix this?
To use barplot you need to rearrange your data:
dat <- read.table(text = " Month Year Number
1 1 2011 3885
2 2 2011 3713
3 3 2011 6189
4 4 2011 3812
5 5 2011 916
6 6 2011 3813
7 7 2011 1324
8 8 2011 1905
9 9 2011 5078
10 10 2011 1587
11 11 2011 3739
12 12 2011 3560
13 1 2012 1790
14 2 2012 1489
15 3 2012 1907
16 4 2012 1615",sep = "",header = TRUE)
y <- c('Jan','Feb','Mar','Apr','May','Jun','Jul','Aug','Sep','Oct','Nov','Dec')
barplot(rbind(dat$Number[1:12], c(dat$Number[13:16], rep(NA, 8))),
        beside = TRUE, names.arg = y)
Or you can use ggplot2 with the data pretty much as is:
library(ggplot2)
dat$Year <- factor(dat$Year)
dat$Month <- factor(dat$Month)
ggplot(dat, aes(x = Month, y = Number, fill = Year)) +
  geom_bar(stat = "identity", position = "dodge") +
  scale_x_discrete(labels = y)
