How can I build a predictive Termination & Active model in R?

I am a newbie in R and I have a dataset. For each year and month, Active is the number of stores in the Enterprise network and Termination is the number of stores that left the network. With these two variables I can calculate the turnover: Turnover = Termination / (Active + Termination) / (number of days in the month). Example: Jan. 2013, Active = 5936, Termination = 100, Turnover = 1.75%.
My question is: with the dataset below (attached), how can I forecast the Active number and the Termination number up to 12-2015?
Is it possible to have a view of the scenario?
Dataset:
Year Month Active Termination Turnover (%)
2013 1 5936 100 1,75%
2013 2 6182 190 3,21%
2013 3 6501 117 1,91%
2013 4 6675 92 1,43%
2013 5 6749 111 1,67%
2013 6 6719 145 2,20%
2013 7 6814 121 1,83%
2013 8 6854 90 1,34%
2013 9 6972 99 1,45%
2013 10 7320 99 1,42%
2013 11 7606 98 1,33%
2013 12 7976 155 1,99%
2014 1 7934 87 1,11%
2014 2 8079 127 1,61%
2014 3 8198 125 1,56%
2014 4 8135 154 1,91%
2014 5 8113 136 1,70%
2014 6 8095 173 2,17%
2014 7 8131 220 2,76%
2014 8 7950 135 1,72%
2014 9 7978 108 1,38%
2014 10 8117 199 2,51%
2014 11 8269 117 1,45%
2014 12 8471 177 2,11%
2015 1 8472 132 1,59%
2015 2 8591 117 1,39%
2015 3 8691 161 1,90%
2015 4 8647 126 1,48%
2015 5 8623 123 1,45%
2015 6 8739 177 2,07%
2015 7 8740 218 2,55%
2015 8 8548 35 0,41%
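As a rough illustration of the kind of projection being asked for, here is a minimal sketch with the forecast package. The data frame name turnover_df, the choice of auto.arima(), and the 4-month horizon (September to December 2015) are assumptions made for the example, not part of the question.
# Minimal forecasting sketch (assumption: the table above is in a data frame
# called turnover_df with columns Year, Month, Active and Termination).
library(forecast)
active_ts <- ts(turnover_df$Active,      start = c(2013, 1), frequency = 12)
term_ts   <- ts(turnover_df$Termination, start = c(2013, 1), frequency = 12)
# Fit automatic ARIMA models and forecast the 4 remaining months of 2015
# (September to December).
fc_active <- forecast(auto.arima(active_ts), h = 4)
fc_term   <- forecast(auto.arima(term_ts),   h = 4)
# Projected turnover, using the ratio from the question (ignoring the
# days-in-month factor here):
fc_term$mean / (fc_active$mean + fc_term$mean)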

Related

Importing Data in R

I want to import data into R but I am getting a few errors. I downloaded my ".CSV" file to my computer, set the working directory with setwd("C:/Users/intellipaat/Desktop/BLOG/files"), and then ran read.data <- read.csv("file1.csv"), but the console returns an error like this:
read.data <- read.csv(file1.csv)
Error in read.table(file = file, header = header, sep = sep, quote = quote, :
  object 'file1.csv' not found
What should I do about this? I tried the internet link route, but again I encountered a problem.
I wrote the following:
install.packages("XML")
install.packages("RCurl")
and then, to load the packages, I ran:
library("XML")
library("RCurl")
url <- "https://en.wikipedia.org/wiki/Ease_of_doing_business_index#Ranking"
tabs <- getURL(url)
and the console gave me this error:
Error in function (type, msg, asError = TRUE) :
error:1407742E:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert protocol version
I would be glad if you could help me with this...
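On the first error: "object 'file1.csv' not found" usually just means the file name was passed to read.csv() without quotes, so R looks for an object called file1.csv instead of a file. Assuming the working directory is set as shown, quoting the name should fix it:
read.data <- read.csv("file1.csv")   # quote the file name so it is treated as a path, not an object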
The Ease of Doing Business rankings table on Wikipedia is an HTML table, not a comma separated values file.
Loading the HTML table into an R data frame can be handled in a relatively straightforward manner with the rvest package. Instead of downloading the HTML file we can read it directly into R with read_html(), and then use html_table() to extract the tabular data into a data frame.
library(rvest)
wiki_url <- "https://en.wikipedia.org/wiki/Ease_of_doing_business_index#Ranking"
aPage <- read_html(wiki_url)
aTable <- html_table(aPage)[[2]] # second node is table of rankings
head(aTable)
...and the first few rows of output:
> head(aTable)
Classification Jurisdiction 2020 2019 2018 2017 2016 2015 2014 2013 2012
1 Very Easy New Zealand 1 1 1 1 2 2 3 3 3
2 Very Easy Singapore 2 2 2 2 1 1 1 1 1
3 Very Easy Hong Kong 3 4 5 4 5 3 2 2 2
4 Very Easy Denmark 4 3 3 3 3 4 5 5 5
5 Very Easy South Korea 5 5 4 5 4 5 7 8 8
6 Very Easy United States 6 8 6 8 7 7 4 4 4
2011 2010 2009 2008 2007 2006
1 3 2 2 2 2 1
2 1 1 1 1 1 2
3 2 3 4 4 5 7
4 6 6 5 5 7 8
5 16 19 23 30 23 27
6 5 4 3 3 3 3
>
Next, we confirm that the last countries were read correctly: Libya, Yemen, Venezuela, Eritrea, and Somalia.
> tail(aTable,n=5)
Classification Jurisdiction 2020 2019 2018 2017 2016 2015 2014 2013 2012
186 Below Average Libya 186 186 185 188 188 188 187 N/A N/A
187 Below Average Yemen 187 187 186 179 170 137 133 118 99
188 Below Average Venezuela 188 188 188 187 186 182 181 180 177
189 Below Average Eritrea 189 189 189 189 189 189 184 182 180
190 Below Average Somalia 190 190 190 190 N/A N/A N/A N/A N/A
2011 2010 2009 2008 2007 2006
186 N/A N/A N/A N/A N/A N/A
187 105 99 98 113 98 90
188 172 177 174 172 164 120
189 180 175 173 171 170 137
190 N/A N/A N/A N/A N/A N/A
Finally, we use tidyr and dplyr to convert the data to narrow-format (tidy) data for subsequent analysis.
library(dplyr)
library(tidyr)
aTable %>%
  # convert years 2017 - 2020 to character because pivot_longer()
  # requires all columns to be of same data type
  mutate_at(3:6, as.character) %>%
  pivot_longer(-c(Classification, Jurisdiction),
               names_to = "Year", values_to = "Rank") %>%
  # convert Rank and Year to numeric values (introducing NA values)
  mutate_at(c("Rank", "Year"), as.numeric) -> rankings
head(rankings)
...and the output:
> head(rankings)
# A tibble: 6 x 4
Classification Jurisdiction Year Rank
<chr> <chr> <dbl> <dbl>
1 Very Easy New Zealand 2020 1
2 Very Easy New Zealand 2019 1
3 Very Easy New Zealand 2018 1
4 Very Easy New Zealand 2017 1
5 Very Easy New Zealand 2016 2
6 Very Easy New Zealand 2015 2
>
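As a quick illustrative check of the tidy format (not part of the original answer), one could pull out a single country's trajectory; Singapore is just an example taken from the table above:
rankings %>%
  filter(Jurisdiction == "Singapore") %>%   # example country
  arrange(Year)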

Merging 2 files without a common key

I have 2 files. One is a time_file which has 3000 rows and the other is a userid file which has 2000 rows. I want to merge the two so that each row (ID) in the userid file is paired with the full set of rows from the time_file.
Rows 1-3000 would show the first userid with each of the dates.
Rows 3001-6000 would show the 2nd userid with each of the dates, and so on.
Thanks in advance!
Time file
mo day year date
11 1 2015 11/1/2015
11 2 2015 11/2/2015
11 3 2015 11/3/2015
11 4 2015 11/4/2015
11 5 2015 11/5/2015
.
.
userid file
userid
154
155
157
158
159
160
.
.
Ideal format(what I want)
mo day year date userid
11 1 2015 11/1/2015 154
11 2 2015 11/2/2015 154
11 3 2015 11/3/2015 154
11 4 2015 11/4/2015 154
11 5 2015 11/5/2015 154
.
.
3 28 2017 3/28/2017 154
3 29 2017 3/29/2017 154
3 30 2017 3/30/2017 154
3 31 2017 3/31/2017 154
11 1 2015 11/1/2015 155
11 2 2015 11/2/2015 155
11 3 2015 11/3/2015 155
11 4 2015 11/4/2015 155
11 5 2015 11/5/2015 155
11 6 2015 11/6/2015 155
The easiest solution in R I can think of, assuming you've got your time data in a data frame and your user IDs in a vector:
final_df <- cbind(date_df, "userid" = rep(user, each = 3000))
This will repeat each user_id 3000 times, then bind the user_id column to the date data frame.
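For reference only (not part of the original answer), the same pairing can also be expressed as a cross join; the names time_df and user_df are assumed here:
# merge() with by = NULL returns the Cartesian product of the two data frames;
# reorder the result afterwards if you need userid as the outer grouping.
combined <- merge(time_df, user_df, by = NULL)
# tidyr::crossing(user_df, time_df) is another option.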
In SPSS you can use the Cartesian product command (STATS CARTPROD) for this.
First, this recreates your example data:
data list free/mo day year (3f4) date (a12).
begin data.
11 1 2015 11/1/2015
11 2 2015 11/2/2015
11 3 2015 11/3/2015
11 4 2015 11/4/2015
11 5 2015 11/5/2015
end data.
DATASET NAME time_file.
data list free/ userid.
begin data.
154,155,157,158,159,160
end data.
DATASET NAME userid.
This will now combine the two tables like you requested:
STATS CARTPROD VAR1=userid INPUT2=time_file VAR2=mo day year date
/SAVE OUTFILE="path\your combined data.sav".

vis.gam transparent at top of "persp" graph

I am running a GAMM using package mgcv. The model is running fine and gives an output that makes sense, but when I use vis.gam(plot.type="persp") my graph appears like this:
[image: persp plot with a transparent region at the top of the surface]
Why is this happening? When I use vis.gam(plot.type="contour") there is no area which is transparent.
It does not appear to be simply a problem with the heat colour palette; the same thing happens when I change the colour scheme of the "persp" plot:
[image: persp plot with "topo" colour scheme]
The contour plot is completely filled while the persp plot is still transparent at the top.
Data:
logcpue assnage distkm fsamplingyr
1 -1.5218399 7 3.490 2015
2 -1.6863990 4 3.490 2012
3 -1.4534337 6 3.490 2014
4 -1.5207723 5 3.490 2013
5 -2.4061258 2 3.490 2010
6 -2.5427262 3 3.490 2011
7 -1.6177367 3 3.313 1998
8 -4.4067192 10 3.313 2005
9 -4.3438054 11 3.313 2006
10 -2.8834031 7 3.313 2002
11 -2.3182512 2 3.313 1997
12 -4.1108738 1 3.235 2010
13 -2.0149030 3 3.235 2012
14 -1.4900912 6 3.235 2015
15 -3.7954892 2 3.235 2011
16 -1.6499840 4 3.235 2013
17 -1.9924302 5 3.235 2014
18 -1.2122716 4 3.189 1998
19 -0.6675703 3 3.189 1997
20 -4.7957905 7 3.106 1998
21 -3.8763958 6 3.106 1997
22 -1.2205021 4 3.073 2010
23 -1.9262374 7 3.073 2013
24 -3.3463891 9 3.073 2015
25 -1.7805862 2 3.073 2008
26 -3.2451931 8 3.073 2014
27 -1.4441139 5 3.073 2011
28 -1.4395389 6 3.073 2012
29 -1.6357552 4 2.876 2014
30 -1.3449091 5 2.876 2015
31 -2.3782225 3 2.876 2013
32 -4.4886364 1 2.876 2011
33 -2.6026897 2 2.876 2012
34 -3.5765503 1 2.147 2002
35 -4.8040211 9 2.147 2010
36 -1.3993664 5 2.147 2006
37 -1.2712250 4 2.147 2005
38 -1.8495790 7 2.147 2008
39 -2.5073795 1 2.034 2012
40 -2.0654553 4 2.034 2015
41 -3.6309855 2 2.034 2013
42 -2.2643639 3 2.034 2014
43 -2.2643639 6 1.452 2006
44 -3.3900241 8 1.452 2008
45 -4.9628446 2 1.452 2002
46 -2.0088240 5 1.452 2005
47 -3.9186675 1 1.323 2013
48 -4.3438054 2 1.323 2014
49 -3.5695327 3 1.323 2015
50 -1.6986690 7 1.200 2005
51 -3.2451931 8 1.200 2006
52 -0.9024016 4 1.200 2002
library(mgcv)
f1 <- formula(logcpue ~ s(assnage) + distkm)
m1 <- gamm(f1, random = list(fsamplingyr = ~1),
           method = "REML",
           data = ycsnew)
vis.gam(m1$gam, color = "topo", plot.type = "persp", theta = 180)
vis.gam(m1$gam, color = "heat", plot.type = "persp", theta = 180)
vis.gam(m1$gam, view = c("assnage", "distkm"),
        plot.type = "contour", color = "heat", las = 1)
vis.gam(m1$gam, view = c("assnage", "distkm"),
        plot.type = "contour", color = "terrain", las = 1, contour.col = "black")
The code of vis.gam has this line:
surf.col[surf.col > max.z * 2] <- NA
I am unable to understand what it is doing, and it appears to be rather ad hoc. NA colour values are generally rendered as transparent. If you comment out that line in a copy of the function (and assign the environment of the new function with environment(vis.gam2) <- environment(vis.gam)), you get complete colouring of the surface.
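A minimal sketch of that workaround, with vis.gam2 as an assumed name for the edited copy:
library(mgcv)
vis.gam2 <- vis.gam                      # start from a copy of vis.gam
# Edit the copy (e.g. interactively via fix(vis.gam2)) and delete or comment out
# the line:  surf.col[surf.col > max.z * 2] <- NA
environment(vis.gam2) <- environment(vis.gam)  # so the copy can find mgcv's internal helpers
vis.gam2(m1$gam, color = "heat", plot.type = "persp", theta = 180)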

Numerical Method for SARIMAX Model using R

My friend is currently working on his assignment about estimating the parameters of a time series model, SARIMAX (Seasonal ARIMA with Exogenous regressors), using the Maximum Likelihood Estimation (MLE) method. His data are monthly rainfall values from 2000 - 2012, with the Indian Ocean Dipole (IOD) index as the exogenous variable.
Here is data:
MONTH YEAR RAINFALL IOD
1 1 2000 15.3720526 0.0624
2 2 2000 10.3440804 0.1784
3 3 2000 14.6116392 0.3135
4 4 2000 18.6842179 0.3495
5 5 2000 15.2937896 0.3374
6 6 2000 15.0233152 0.1946
7 7 2000 11.1803399 0.3948
8 8 2000 11.0589330 0.4391
9 9 2000 10.1488916 0.3020
10 10 2000 21.1187121 0.2373
11 11 2000 15.3980518 -0.0324
12 12 2000 18.9393770 -0.0148
13 1 2001 19.1075901 -0.2448
14 2 2001 14.9097284 0.1673
15 3 2001 19.2379833 0.1538
16 4 2001 19.6900990 0.3387
17 5 2001 8.0684571 0.3578
18 6 2001 14.0463518 0.3394
19 7 2001 5.9916609 0.1754
20 8 2001 8.4439327 0.0048
21 9 2001 11.8321596 0.1648
22 10 2001 24.3700636 -0.0653
23 11 2001 22.3584436 0.0291
24 12 2001 23.6114379 0.1731
25 1 2002 17.8409641 0.0404
26 2 2002 14.7377067 0.0914
27 3 2002 21.2226294 0.1766
28 4 2002 16.6403125 -0.1512
29 5 2002 10.8074049 -0.1072
30 6 2002 6.3796552 0.0244
31 7 2002 17.0704423 0.0542
32 8 2002 1.7606817 0.0898
33 9 2002 5.3665631 0.6736
34 10 2002 8.3246622 0.7780
35 11 2002 17.8044938 0.3616
36 12 2002 16.7062862 0.0673
37 1 2003 13.5572859 -0.0628
38 2 2003 17.1113997 0.2038
39 3 2003 14.9899967 0.1239
40 4 2003 14.0996454 0.0997
41 5 2003 11.4017542 0.0581
42 6 2003 6.7749539 0.3490
43 7 2003 7.1484264 0.4410
44 8 2003 10.3004854 0.4063
45 9 2003 10.6630202 0.3289
46 10 2003 20.6518764 0.1394
47 11 2003 20.8638443 0.1077
48 12 2003 20.5548048 0.4093
49 1 2004 16.0436903 0.2257
50 2 2004 17.2568827 0.2978
51 3 2004 20.2361063 0.2523
52 4 2004 11.6619038 0.1212
53 5 2004 12.8296532 -0.3395
54 6 2004 8.4202138 -0.1764
55 7 2004 15.5916644 0.0118
56 8 2004 0.9486833 0.1651
57 9 2004 7.2732386 0.2825
58 10 2004 18.0083314 0.3747
59 11 2004 14.4672043 0.1074
60 12 2004 17.3637554 0.0926
61 1 2005 18.9420168 0.0551
62 2 2005 17.0146995 -0.3716
63 3 2005 23.3002146 -0.2641
64 4 2005 17.8689675 0.2829
65 5 2005 17.2365890 0.1883
66 6 2005 14.0178458 0.0347
67 7 2005 12.6925175 -0.0680
68 8 2005 9.3861600 -0.0420
69 9 2005 11.7132404 -0.1425
70 10 2005 18.5768673 -0.0514
71 11 2005 19.6723156 -0.0008
72 12 2005 18.3248465 -0.0659
73 1 2006 18.6252517 0.0560
74 2 2006 18.7002674 -0.1151
75 3 2006 23.4882950 -0.0562
76 4 2006 19.5652754 0.1862
77 5 2006 13.6857590 0.0105
78 6 2006 11.1265448 0.1504
79 7 2006 11.0227038 0.3490
80 8 2006 7.6550637 0.5267
81 9 2006 1.8708287 0.8089
82 10 2006 5.4129474 0.9479
83 11 2006 15.2249795 0.7625
84 12 2006 14.1703917 0.3941
85 1 2007 22.8691932 0.4027
86 2 2007 14.3317829 0.3353
87 3 2007 13.0766968 0.2792
88 4 2007 23.2335964 0.2960
89 5 2007 12.2474487 0.4899
90 6 2007 11.3357840 0.2445
91 7 2007 9.3112835 0.3629
92 8 2007 1.6431677 0.5396
93 9 2007 6.8483575 0.6252
94 10 2007 13.1529464 0.4540
95 11 2007 14.5120639 0.2489
96 12 2007 18.7909553 0.0054
97 1 2008 17.6493626 0.3037
98 2 2008 13.3828248 0.1166
99 3 2008 19.0525589 0.2730
100 4 2008 17.3262806 0.0467
101 5 2008 5.2345009 0.4020
102 6 2008 3.3166248 0.4263
103 7 2008 10.1094016 0.5558
104 8 2008 11.7260394 0.4236
105 9 2008 10.7470926 0.4762
106 10 2008 15.1591557 0.4127
107 11 2008 25.5558213 0.1474
108 12 2008 18.2455474 0.1755
109 1 2009 14.5430396 0.2185
110 2 2009 12.8569048 0.3521
111 3 2009 24.0707291 0.2680
112 4 2009 16.0374562 0.3234
113 5 2009 7.2387844 0.4757
114 6 2009 13.8021737 0.3078
115 7 2009 7.5232972 0.1179
116 8 2009 6.3403470 0.1999
117 9 2009 4.6583259 0.2814
118 10 2009 13.0958008 0.3646
119 11 2009 15.3329710 0.1914
120 12 2009 19.0394328 0.3836
121 1 2010 15.5080624 0.4732
122 2 2010 17.1551742 0.2134
123 3 2010 23.9729014 0.6320
124 4 2010 18.2537667 0.5644
125 5 2010 18.2236111 0.1881
126 6 2010 14.6082169 0.0680
127 7 2010 13.6161669 0.3111
128 8 2010 11.1220502 0.2472
129 9 2010 20.7870152 0.1259
130 10 2010 19.5371441 -0.0529
131 11 2010 24.8837296 -0.2133
132 12 2010 15.5016128 0.0233
133 1 2011 17.3435867 0.3739
134 2 2011 17.6096564 0.4228
135 3 2011 19.0682983 0.5413
136 4 2011 20.4890214 0.3569
137 5 2011 12.0540450 0.1313
138 6 2011 12.5896783 0.2642
139 7 2011 5.0990195 0.5356
140 8 2011 6.5726707 0.6490
141 9 2011 2.5099801 0.5884
142 10 2011 17.6380271 0.7376
143 11 2011 17.5128524 0.6004
144 12 2011 17.2655727 0.0990
145 1 2012 16.6883193 0.2272
146 2 2012 20.8374663 0.1049
147 3 2012 16.7002994 0.1991
148 4 2012 18.7962762 -0.0596
149 5 2012 16.9292646 -0.1165
150 6 2012 11.6490343 0.2207
151 7 2012 6.2529993 0.8586
152 8 2012 5.8991525 0.9473
153 9 2012 7.8485667 0.8419
154 10 2012 12.5817328 0.4928
155 11 2012 24.7770055 0.1684
156 12 2012 23.2486559 0.4899
He is working in R because it has packages for fitting SARIMAX models, and so far he has been doing well with the arimax() function from the TSA package, using seasonal ARIMA order (1,0,1).
So here I attach his syntax:
# Import the data
data=read.csv("C:/DATA.csv", header=TRUE)
rainfall=data$RAINFALL
exo=data$IOD
# Convert the series to monthly ts objects so R can treat them as time series
library(forecast)
rainfall_ts=ts(rainfall, start=c(2000, 1), end=c(2012, 12), frequency = 12)
exo_ts=ts(exo, start=c(2000, 1), end=c(2012, 12), frequency = 12)
# Fit the SARIMAX model with seasonal ARIMA order (1,0,1); estimation method is MLE ('ML')
library(TSA)
model_ts=arimax(log(rainfall_ts), order=c(1,0,1),
                seasonal=list(order=c(1,0,1), period=12),
                xreg=exo_ts, method='ML')
Below is the result:
> model_ts
Call:
arimax(x = log(rainfall_ts), order = c(1, 0, 1), seasonal = list(order = c(1,
0, 1), period = 12), xreg = exo_ts, method = "ML")
Coefficients:
ar1 ma1 sar1 sma1 intercept xreg
0.5730 -0.4342 0.9996 -0.9764 2.6757 -0.4894
s.e. 0.2348 0.2545 0.0018 0.0508 0.1334 0.1489
sigma^2 estimated as 0.1521: log likelihood = -86.49, aic = 184.99
Although he claims the syntax works, his lecturer expected more.
Theoretically, because he used MLE, he has shown that setting the first derivatives of the log-likelihood function to zero only yields implicit solutions. This means the estimation cannot be completed analytically, so the maximization has to be carried out with a numerical method.
This is what my friend's lecturer expects: he should at least be able to convince him that the estimation really does need to be done numerically, and if so, he should be able to show which numerical method R uses (for example Newton-Raphson, BFGS, BHHH, etc.).
The problem is that the arimax() function does not offer a choice of numerical method when the estimation has to be carried out numerically, as in the call below:
model_ts=arimax(log(rainfall_ts), order=c(1,0,1), seasonal=list(order=c(1,0,1), period=12), xreg=exo_ts, method='ML')
The 'method' argument above refers to the estimation method, and the available options are ML, CSS, and CSS-ML. Clearly the syntax above says nothing about the numerical method, and that is the issue.
So is there any way to find out which numerical method R uses? Or does my friend have to write his own program instead of relying on the arimax() function?
If there are any errors in my code, please let me know. I also apologize for any grammatical or vocabulary mistakes. English is not my native language.
Some suggestions:
Estimate the model with each of the methods ML, CSS, and CSS-ML. Do the parameter estimates agree?
You can view the source code of the arimax() function by typing arimax, View(arimax) or getAnywhere(arimax) in the console.
Or you can debug it by placing a breakpoint on the line model_ts=arimax(...) and then sourcing or debugSource()-ing your script. You can then step into the arimax function and see/verify for yourself which optimization method arimax uses.
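A small sketch of how one might follow those suggestions. The optimizer detail below is offered as background rather than a verified trace of TSA's internals: TSA::arimax() is derived from stats::arima(), and stats::arima() maximizes the likelihood numerically with optim(), whose optim.method argument defaults to "BFGS".
library(TSA)
getAnywhere(arimax)    # print the source of arimax() and look for the optim() call
debugonce(arimax)      # step through the next call to arimax() interactively
args(stats::arima)     # note the optim.method = "BFGS" default in the function arimax() is modelled on
# Compare the three estimation methods, as suggested above:
for (m in c("ML", "CSS", "CSS-ML")) {
  fit <- arimax(log(rainfall_ts), order = c(1, 0, 1),
                seasonal = list(order = c(1, 0, 1), period = 12),
                xreg = exo_ts, method = m)
  print(round(coef(fit), 4))
}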

diff operation within a group, after a dplyr::group_by()

Let's say I have this data.frame (with 3 variables)
ID Period Score
123 2013 146
123 2014 133
23 2013 150
456 2013 205
456 2014 219
456 2015 140
78 2012 192
78 2013 199
78 2014 133
78 2015 170
Using dplyr, I can group them by ID and filter the IDs that appear more than once:
data <- data %>% group_by(ID) %>% filter(n() > 1)
Now, what I would like to achieve is to add a column that is:
Difference = Score of Period P - Score of Period P-1
to get something like this:
ID Period Score Difference
123 2013 146
123 2014 133 -13
456 2013 205
456 2014 219 14
456 2015 140 -79
78 2012 192
78 2013 199 7
78 2014 133 -66
78 2015 170 37
It is rather trivial to do this in a spreadsheet, but I have no idea how to achieve it in R.
Thanks for any help or guidance.
Here is a solution using lag(). Depending on the use case it might be more convenient than diff(), because the NA values clearly show that a particular value has no predecessor, whereas a 0 obtained with diff() (padded for the first row) could mean either (a) a missing predecessor or (b) two consecutive periods with the same score.
data %>%
  group_by(ID) %>%
  filter(n() > 1) %>%
  mutate(Difference = Score - lag(Score))
# ID Period Score Difference
# 1 123 2013 146 NA
# 2 123 2014 133 -13
# 3 456 2013 205 NA
# 4 456 2014 219 14
# 5 456 2015 140 -79
# 6 78 2012 192 NA
# 7 78 2013 199 7
# 8 78 2014 133 -66
# 9 78 2015 170 37
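For comparison, a diff()-based variant (assuming, as in the question, that the data frame is called data, and adding an explicit ordering by Period) might look like this:
data %>%
  group_by(ID) %>%
  filter(n() > 1) %>%
  arrange(Period, .by_group = TRUE) %>%   # make sure periods are ordered within each ID
  mutate(Difference = c(NA, diff(Score))) %>%
  ungroup()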
