How to create a matrix for simple correspondence analysis in R?

I am trying to create a matrix in order to apply a simple correspondence analysis to it. I have two categorical variables, exp and conexinternet, with 3 levels each:
obs conexinternet exp
1 1 2
2 1 1
3 2 2
4 1 1
5 1 1
6 2 1
7 1 2
8 1 2
9 1 2
10 2 1
11 1 1
12 2 1
13 2 2
14 2 1
15 1 1
16 2 2
17 1 1
18 2 2
19 2 2
20 2 2
21 2 2
22 1 1
23 2 3
24 1 1
25 2 1
26 2 1
27 1 1
28 2 2
29 2 1
30 1 2
31 1 2
32 2 3
33 2 1
34 2 1
35 2 1
36 3 2
37 2 1
38 3 2
39 2 3
40 2 3
41 2 2
42 2 3
43 2 2
44 2 2
45 2 1
46 2 2
47 2 3
48 1 3
49 2 3
50 3 2
51 2 2
52 2 2
53 2 1
54 1 2
55 1 1
56 2 3
57 3 2
58 3 1
59 3 1
60 1 2
61 2 3
62 2 2
63 3 1
64 3 2
65 3 2
66 1 2
67 3 2
68 3 2
69 3 3
70 2 1
71 3 3
72 3 2
73 3 2
74 3 2
75 3 1
76 3 2
77 3 1
I want to make a vector that categorizes the observations as 11, 12, 13, 21, 22, 23, 31, 32, 33. How can I do it?

Is this what you want?
d <- read.table(text="obs conexinternet exp
1 1 2
...
77 3 1", header=T)
(tab <- xtabs(~conexinternet+exp, d))
# exp
# conexinternet 1 2 3
# 1 10 9 1
# 2 14 15 9
# 3 5 12 2
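To get the combined 11/12/…/33 codes themselves as a vector (rather than the contingency table), pasting the two columns together is enough. A minimal base-R sketch, using a few stand-in rows in place of the full data:

```r
# Stand-in for the first few rows of the data frame d read above.
d <- data.frame(conexinternet = c(1, 1, 2), exp = c(2, 1, 2))
# One code per observation: the conexinternet digit followed by the exp digit.
cat_vec <- paste0(d$conexinternet, d$exp)
cat_vec
# [1] "12" "11" "22"
# interaction(d$conexinternet, d$exp) gives the same grouping as a factor.
```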

Related

How to get p values for odds ratios from an ordinal regression in R

I am trying to get the p values for my odds ratios from an ordinal regression using R.
I previously computed p values for the log-odds coefficients like this:
library(MASS)  # for polr
scm <- polr(finaloutcome ~ Size_no + Hegemony + Committee, data = data3, Hess = TRUE)
(ctable <- coef(summary(scm)))
## Calculate and store p value
p <- pnorm(abs(ctable[, "t value"]), lower.tail = FALSE) * 2
## combined table
(ctable <- cbind(ctable, "p value" = p))
I created my odds ratios like this:
ci <- confint.default(scm)
exp(coef(scm))
## OR and CI
exp(cbind(OR = coef(scm), ci))
However, I am now unsure how to create the p values for the odds ratio. Using the previous method I got:
(ctable1 <- exp(coef(scm)))
p1 <- pnorm(abs(ctable1[, "t value"]), lower.tail = FALSE) * 2
(ctable <- cbind(ctable, "p value" = p1))
However, I get the error: Error in ctable1[, "t value"] : incorrect number of dimensions
Odds ratio output sample:
        Size        Hegem    Committee
9.992240e-01 6.957805e-02 1.204437e-01
Data sample:
    finaloutcome Size_no Committee Hegemony
1              3      54         2        0
2              2     127         3        0
3              2     127         3        0
4              2      22         1        1
5              2     193         4        1
6              2      54         2        0
7             NA      11         1        1
8              3      54         2        0
9              3      22         1        1
10             2      53         3        1
11             2      53         3        1
12             2      53         3        1
13             2      53         3        1
14             2      53         3        1
15             2      53         3        1
16             2     120         3        0
17             2     120         3        0
18             1      22         1        1
19             1      22         1        1
20             2     193         4        1
21             2     193         4        1
22             2     193         4        1
23             2      12         4        1
24             2      35         1        1
25             1     193         4        1
26             1     164         4        1
27             1      12         4        1
28             2      12         4        1
29             2     193         4        1
30             2      54         2        0
31             2     193         4        1
32             2     193         4        1
33             2      54         2        0
34             2      12         4        1
35             2      22         1        1
36             4      53         3        1
37             2      35         1        1
38             1     193         4        1
39             5      54         2        0
40             7     164         4        1
41             5      54         2        0
42             1      12         4        1
43             7     193         4        1
44             2     193         4        1
45             2     193         4        1
46             2     193         4        1
47             2     193         4        1
48             2     193         4        1
49             2      12         4        1
50             2      22         1        1
51             2      12         4        1
52             2      12         4        1
53             6      13         1        1
54             6      13         1        1
55             6      13         1        1
56             6      12         4        1
57             2     193         4        1
58             3      12         4        1
59             1      12         4        1
60             1      12         4        1
61             8      35         1        1
62             2     193         4        1
63             8      35         1        1
64             6      30         2        1
65             8      12         4        1
66             4      12         4        1
67             5      30         2        1
68             5      54         2        0
69             7      12         4        1
70             5      12         4        1
71             5      54         2        0
72             5     193         4        1
73             5     193         4        1
74             5      54         2        0
75             5      54         2        0
76             1      11         1        1
77             3      22         1        1
78             3      12         4        1
79             6      12         4        1
80             2      22         1        1
81             8     193         4        1
82             8     193         4        1
83             4     193         4        1
84             2     193         4        1
85             2     193         4        1
86             2     193         4        1
87             2     193         4        1
88             2     193         4        1
89             2     193         4        1
90             2     193         4        1
91             2     193         4        1
92             2     193         4        1
93             8     193         4        1
94             6      12         4        1
95             5      12         4        1
96             5      12         4        1
97             5      12         4        1
98             5      12         4        1
99             5      12         4        1
100            5      12         4        1
I usually use lm or glm to create my model (mdl <- lm(…) or mdl <- glm(…)), then call summary() on the fitted object to see these values. Beyond that, you can use the yardstick and broom packages. I recommend the book R for Data Science; it has a great explanation of modeling with the tidymodels packages.
I went through the same difficulty.
I finally used the function tidy from the broom package: https://broom.tidymodels.org/reference/tidy.polr.html
library(broom)
tidy(scm, p.values = TRUE)
This does not yet work if you have categorical variables with more than two levels, or missing values.
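On the original question: exponentiating the coefficients does not change the test, because testing OR = 1 is the same as testing the log-odds coefficient = 0, so the p values from the first table can simply be reused next to the odds ratios. A self-contained sketch with a stand-in coefficient table shaped like `coef(summary(scm))` (the real values would come from polr):

```r
# Stand-in for ctable <- coef(summary(scm)): two coefficients with SEs.
ctable <- cbind(Value = c(0.5, -0.2), "Std. Error" = c(0.1, 0.1))
# p values computed on the log-odds scale, as in the question.
p <- pnorm(abs(ctable[, "Value"] / ctable[, "Std. Error"]), lower.tail = FALSE) * 2
# exp() maps the estimates to odds ratios; the p values carry over unchanged.
(or_table <- cbind(OR = exp(ctable[, "Value"]), "p value" = p))
```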

Add rows to dataframe in R based on values in column

I have a dataframe with 2 columns: time and Day. There are 3 days, and for each day time runs from 1 to 12. I want to add new rows for each day with times -2, 1 and 0. How do I do this?
I have tried using add_row and specifying the row number to add at, but that row number changes each time a new row is added, making the process tedious. Thanks in advance.
We could use add_row, then slice the desired sequence, and bind everything into one dataframe:
library(tibble)
library(dplyr)
df1 <- df %>%
  add_row(time = -2:0, Day = c(1, 1, 1), .before = 1) %>%
  slice(1:15)
df2 <- bind_rows(df1, df1, df1) %>%
  mutate(Day = rep(row_number(), each = 15, length.out = n()))
Output:
# A tibble: 45 x 2
time Day
<dbl> <int>
1 -2 1
2 -1 1
3 0 1
4 1 1
5 2 1
6 3 1
7 4 1
8 5 1
9 6 1
10 7 1
11 8 1
12 9 1
13 10 1
14 11 1
15 12 1
16 -2 2
17 -1 2
18 0 2
19 1 2
20 2 2
21 3 2
22 4 2
23 5 2
24 6 2
25 7 2
26 8 2
27 9 2
28 10 2
29 11 2
30 12 2
31 -2 3
32 -1 3
33 0 3
34 1 3
35 2 3
36 3 3
37 4 3
38 5 3
39 6 3
40 7 3
41 8 3
42 9 3
43 10 3
44 11 3
45 12 3
Here's a fast way to create the desired dataframe from scratch using expand.grid(), rather than adding individual rows:
df <- expand.grid(-2:12,1:3)
colnames(df) <- c("time","day")
Results:
df
time day
1 -2 1
2 -1 1
3 0 1
4 1 1
5 2 1
6 3 1
7 4 1
8 5 1
9 6 1
10 7 1
11 8 1
12 9 1
13 10 1
14 11 1
15 12 1
16 -2 2
17 -1 2
18 0 2
19 1 2
20 2 2
21 3 2
22 4 2
23 5 2
24 6 2
25 7 2
26 8 2
27 9 2
28 10 2
29 11 2
30 12 2
31 -2 3
32 -1 3
33 0 3
34 1 3
35 2 3
36 3 3
37 4 3
38 5 3
39 6 3
40 7 3
41 8 3
42 9 3
43 10 3
44 11 3
45 12 3
You can use tidyr::crossing
library(dplyr)
library(tidyr)
add_values <- c(-2, 1, 0)
crossing(time = add_values, Day = unique(day$Day)) %>%
  bind_rows(day) %>%
  arrange(Day, time)
# A tibble: 45 x 2
# time Day
# <dbl> <int>
# 1 -2 1
# 2 0 1
# 3 1 1
# 4 1 1
# 5 2 1
# 6 3 1
# 7 4 1
# 8 5 1
# 9 6 1
#10 7 1
# … with 35 more rows
If you meant -2, -1 and 0 you can also use complete.
tidyr::complete(day, Day, time = -2:0)

Add values of one group into another group in R

I have a question on how to add the value from one row of a group to the rest of the rows in that group and then delete that row. For example:
df <- data.frame(Year    = c(1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2),
                 Cluster = c("a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","c","b","b","b","b","b","b","b","b","b","b","b","b","b","b","b","b","b","b","b","b","d"),
                 Seed    = c(1,1,1,1,1,2,2,2,2,2,3,3,3,3,3,99,99,99,99,99,99),
                 Day     = c(1,2,3,4,5,1,2,3,4,5,1,2,3,4,5,1,2,3,4,5,1),
                 value   = c(5,2,1,2,8,6,7,9,3,5,2,1,2,8,6,55,66,77,88,99,10))
In the above example, the data is grouped by Year, Cluster, Seed and Day. The value of each Seed = 99 row needs to be added to the other rows in its (Year, Cluster, Day) group, and that row then deleted. For example, row 16 belongs to the group (Year = 1, Cluster = a, Day = 1, Seed = 99), so its value of 55 should be added to row 1 (5 + 55), row 6 (6 + 55) and row 11 (2 + 55), and row 16 should be deleted. Row 21, however, which is in Cluster c with Seed = 99, should remain in the data as is, because no other row matches its Year + Cluster + Day combination.
My actual data has about 1 million records with 10 years, 80 clusters, 500 days and 10 + 1 seeds (1 to 10 and 99), so I am looking for an efficient solution. The expected output is:
Year Cluster Seed Day value
1 1 a 1 1 60
2 1 a 1 2 68
3 1 a 1 3 78
4 1 a 1 4 90
5 1 a 1 5 107
6 1 a 2 1 61
7 1 a 2 2 73
8 1 a 2 3 86
9 1 a 2 4 91
10 1 a 2 5 104
11 1 a 3 1 57
12 1 a 3 2 67
13 1 a 3 3 79
14 1 a 3 4 96
15 1 a 3 5 105
16 1 c 99 1 10
17 2 b 1 1 60
18 2 b 1 2 68
19 2 b 1 3 78
20 2 b 1 4 90
21 2 b 1 5 107
22 2 b 2 1 61
23 2 b 2 2 73
24 2 b 2 3 86
25 2 b 2 4 91
26 2 b 2 5 104
27 2 b 3 1 57
28 2 b 3 2 67
29 2 b 3 3 79
30 2 b 3 4 96
31 2 b 3 5 105
32 2 d 99 1 10
A data.table approach:
library(data.table)
df <- setDT(df)[, `:=` (value = ifelse(Seed != 99, value + value[Seed == 99], value),
                        flag = Seed == 99 & .N == 1),
                by = .(Year, Cluster, Day)][!(Seed == 99 & flag == FALSE), ][, "flag" := NULL]
Output:
df[]
Year Cluster Seed Day value
1: 1 a 1 1 60
2: 1 a 1 2 68
3: 1 a 1 3 78
4: 1 a 1 4 90
5: 1 a 1 5 107
6: 1 a 2 1 61
7: 1 a 2 2 73
8: 1 a 2 3 86
9: 1 a 2 4 91
10: 1 a 2 5 104
11: 1 a 3 1 57
12: 1 a 3 2 67
13: 1 a 3 3 79
14: 1 a 3 4 96
15: 1 a 3 5 105
16: 1 c 99 1 10
17: 2 b 1 1 60
18: 2 b 1 2 68
19: 2 b 1 3 78
20: 2 b 1 4 90
21: 2 b 1 5 107
22: 2 b 2 1 61
23: 2 b 2 2 73
24: 2 b 2 3 86
25: 2 b 2 4 91
26: 2 b 2 5 104
27: 2 b 3 1 57
28: 2 b 3 2 67
29: 2 b 3 3 79
30: 2 b 3 4 96
31: 2 b 3 5 105
32: 2 d 99 1 10
Here's an approach using the tidyverse. If you're looking for speed with a million rows, a data.table solution will probably perform better.
library(tidyverse)
df <- data.frame(Year    = c(1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2),
                 Cluster = c("a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","c","b","b","b","b","b","b","b","b","b","b","b","b","b","b","b","b","b","b","b","b","d"),
                 Seed    = c(1,1,1,1,1,2,2,2,2,2,3,3,3,3,3,99,99,99,99,99,99),
                 Day     = c(1,2,3,4,5,1,2,3,4,5,1,2,3,4,5,1,2,3,4,5,1),
                 value   = c(5,2,1,2,8,6,7,9,3,5,2,1,2,8,6,55,66,77,88,99,10))
seeds <- df %>%
  filter(Seed == 99)
matches <- df %>%
  filter(Seed != 99) %>%
  inner_join(select(seeds, -Seed), by = c("Year", "Cluster", "Day")) %>%
  mutate(value = value.x + value.y) %>%
  select(Year, Cluster, Seed, Day, value)
no_matches <- anti_join(seeds, matches, by = c("Year", "Cluster", "Day"))
bind_rows(matches, no_matches) %>%
  arrange(Year, Cluster, Seed, Day)
#> Year Cluster Seed Day value
#> 1 1 a 1 1 60
#> 2 1 a 1 2 68
#> 3 1 a 1 3 78
#> 4 1 a 1 4 90
#> 5 1 a 1 5 107
#> 6 1 a 2 1 61
#> 7 1 a 2 2 73
#> 8 1 a 2 3 86
#> 9 1 a 2 4 91
#> 10 1 a 2 5 104
#> 11 1 a 3 1 57
#> 12 1 a 3 2 67
#> 13 1 a 3 3 79
#> 14 1 a 3 4 96
#> 15 1 a 3 5 105
#> 16 1 c 99 1 10
#> 17 2 b 1 1 60
#> 18 2 b 1 2 68
#> 19 2 b 1 3 78
#> 20 2 b 1 4 90
#> 21 2 b 1 5 107
#> 22 2 b 2 1 61
#> 23 2 b 2 2 73
#> 24 2 b 2 3 86
#> 25 2 b 2 4 91
#> 26 2 b 2 5 104
#> 27 2 b 3 1 57
#> 28 2 b 3 2 67
#> 29 2 b 3 3 79
#> 30 2 b 3 4 96
#> 31 2 b 3 5 105
#> 32 2 d 99 1 10
Created on 2018-11-23 by the reprex package (v0.2.1)

All arguments must be the same length using svm

I am trying to apply SVM to my data in order to predict future observations.
In doing so I ran into the following error:
All arguments must be the same length
> svmmodele1<-svm(data$note ~ AppCache+TCP+DNS,data=data,scale = FALSE,kernel="linear",cost= 0.08,gamma=0.06)
> svm.video.pred1<-predict(svmmodele1,data)
> svm.video.pred1
1 3 4 5 6 7 10 11 12 13 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3
32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3
Levels: 1 2 3 4 5
> svm.video.table1<-table(pred=svm.video.pred1, true= data$note)
Error in table(pred = svm.video.pred1, true = data$note) :
All arguments must be the same length
data$note
[1] 2 2 2 3 3 3 2 2 2 1 1 1 1 1 1 3 3 3 3 3 3 3 3 3 4 4 4 3 3 3 4 4 4 4 4 4 5 5
[39] 5 5 5 5 5 5 5 3 3 3 1 1 1 1 1 1
Levels: 1 2 3 4 5
For anyone stuck on the same problem: in my case the cause of the error was that some of my variables contained negative values.
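Another frequent cause of this exact error is that `predict()` silently drops rows containing `NA`, so the prediction vector ends up shorter than `data$note` (notice the gaps in the printed indices above: 1, 3, 4, …). A hedged base-R sketch of re-aligning the truth vector before calling `table()`, with small stand-in vectors in place of the real model output:

```r
# Stand-ins: pred keeps the row names that survived predict(); true is full length.
pred <- setNames(factor(c(3, 3, 3), levels = 1:5), c(1, 3, 4))
true <- factor(c(2, 2, 3, 3), levels = 1:5)
keep <- as.integer(names(pred))  # row indices predict() actually returned
tab  <- table(pred = pred, true = true[keep])
sum(tab)  # both arguments now have the same length
# [1] 3
```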

Vuong test has different results on R and Stata

I am running a zero inflated negative binomial model with probit link on R (http://www.ats.ucla.edu/stat/r/dae/zinbreg.htm) and Stata (http://www.ats.ucla.edu/stat/stata/dae/zinb.htm).
There is a Vuong test to compare whether this specification is better than an ordinary negative binomial model. Where R tells me I am better off using the latter, Stata says a ZINB is the preferable choice. In both instances I assume that the process leading to the excess zeros is the same as for the negative binomial distributed non-zero observations. Coefficients are indeed the same (except that Stata prints one digit more).
In R I run (data code is below)
require(pscl)
require(MASS)  # for glm.nb
ZINB <- zeroinfl(Two.Year ~ length + numAuth + numAck,
                 data = Master,
                 dist = "negbin", link = "probit")
NB <- glm.nb(Two.Year ~ length + numAuth + numAck,
             data = Master)
Comparing both with vuong(ZINB, NB) from the same package yields
Vuong Non-Nested Hypothesis Test-Statistic: -10.78337
(test-statistic is asymptotically distributed N(0,1) under the
null that the models are indistinguishible)
in this case:
model2 > model1, with p-value < 2.22e-16
Hence: NB is better than ZINB.
In Stata I run
zinb twoyear numauth length numack, inflate(numauth length numack) probit vuong
and receive (iteration fitting suppressed)
Zero-inflated negative binomial regression Number of obs = 714
Nonzero obs = 433
Zero obs = 281
Inflation model = probit LR chi2(3) = 74.19
Log likelihood = -1484.763 Prob > chi2 = 0.0000
------------------------------------------------------------------------------
twoyear | Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
twoyear |
numauth | .1463257 .0667629 2.19 0.028 .0154729 .2771785
length | .038699 .006077 6.37 0.000 .0267883 .0506097
numack | .0333765 .010802 3.09 0.002 .0122049 .0545481
_cons | -.4588568 .2068824 -2.22 0.027 -.8643389 -.0533747
-------------+----------------------------------------------------------------
inflate |
numauth | .2670777 .1141893 2.34 0.019 .0432708 .4908846
length | .0147993 .0105611 1.40 0.161 -.0059001 .0354987
numack | .0177504 .0150118 1.18 0.237 -.0116722 .0471729
_cons | -2.057536 .5499852 -3.74 0.000 -3.135487 -.9795845
-------------+----------------------------------------------------------------
/lnalpha | .0871077 .1608448 0.54 0.588 -.2281424 .4023577
-------------+----------------------------------------------------------------
alpha | 1.091014 .175484 .7960109 1.495346
------------------------------------------------------------------------------
Vuong test of zinb vs. standard negative binomial: z = 2.36 Pr>z = 0.0092
In the very last line Stata tells me that in this case ZINB is better than NB: Both test statistic and p-value differ. How come?
Data (R code)
Master <- read.table(text="
Two.Year numAuth length numAck
0 1 4 6
3 3 28 3
3 1 18 4
0 1 42 4
0 2 17 0
2 1 10 3
1 2 20 0
0 1 28 3
1 1 23 7
0 2 34 3
2 2 24 2
0 2 18 0
0 1 23 7
0 1 35 11
4 2 33 13
0 2 24 4
0 2 21 9
1 4 21 0
1 1 8 6
2 1 18 1
0 3 28 2
0 2 17 2
1 1 30 6
4 2 28 16
1 4 35 1
2 3 19 2
0 1 24 2
1 3 26 6
1 1 17 7
0 3 42 4
0 3 32 8
3 1 33 23
7 2 24 9
0 2 25 6
1 1 7 1
0 1 15 2
2 2 16 2
0 1 23 6
2 3 18 7
0 1 28 5
0 1 12 2
1 1 25 4
0 4 18 1
1 2 32 6
1 1 15 2
2 2 14 4
0 2 24 9
0 3 30 9
0 2 19 9
0 2 14 2
2 2 23 3
0 2 18 0
1 3 13 4
0 1 10 4
0 1 24 8
0 2 22 9
2 3 29 5
2 1 25 5
0 2 17 4
1 2 24 0
0 2 26 0
2 2 33 12
1 4 17 2
1 1 25 8
3 1 36 11
0 1 10 4
9 2 60 22
0 2 18 3
2 3 19 6
2 2 23 7
2 2 26 0
1 1 20 5
4 2 31 4
0 2 21 2
0 1 24 12
1 1 12 1
1 3 26 5
1 4 32 8
2 3 21 1
3 3 26 3
4 2 36 6
3 3 28 2
1 3 27 1
0 2 12 5
0 3 24 4
0 2 35 1
0 2 17 2
3 2 28 3
0 3 29 8
0 2 20 3
3 2 28 0
11 1 30 2
0 3 22 2
21 3 59 24
0 2 15 5
0 2 22 2
5 4 33 0
0 2 21 2
4 2 21 0
0 3 25 9
2 2 31 5
1 2 23 1
2 3 25 0
0 1 13 3
0 1 22 7
0 1 16 3
6 1 18 4
2 2 19 7
3 2 22 10
0 1 12 6
0 1 23 8
1 1 23 9
1 2 32 15
1 3 26 8
1 3 15 2
0 3 16 2
0 4 29 2
2 3 24 3
2 3 32 1
2 1 29 13
1 3 26 0
5 1 23 4
3 2 21 2
4 2 19 4
4 3 19 2
2 1 29 0
0 1 13 6
0 2 28 2
0 3 33 1
0 1 20 2
0 1 30 8
1 2 19 2
17 2 30 7
5 3 39 17
21 3 30 5
1 3 29 24
1 1 31 4
4 3 26 13
4 2 14 16
2 3 31 14
5 3 37 10
15 2 52 13
1 1 6 5
2 1 24 13
17 3 17 3
3 2 29 5
2 1 26 7
3 3 34 9
5 2 39 2
3 1 26 7
1 2 32 12
2 3 26 4
9 3 28 8
1 3 29 1
4 1 24 7
9 1 40 13
1 2 27 21
2 2 27 13
5 3 31 10
10 2 29 15
10 2 41 15
8 1 24 17
2 4 16 5
17 2 26 20
3 2 31 3
2 2 18 1
6 3 32 9
2 1 32 11
4 3 34 8
4 1 16 1
5 1 33 5
0 2 17 11
17 2 48 8
2 1 11 2
5 3 33 18
4 2 25 9
10 2 17 5
1 1 25 8
3 3 41 16
2 1 40 13
4 3 25 2
16 4 32 13
10 1 33 18
5 2 25 3
3 2 20 3
2 3 14 7
3 2 23 4
2 2 28 4
3 2 25 19
0 2 14 6
3 1 28 18
8 3 27 11
1 3 25 17
21 2 33 15
9 2 24 2
1 1 16 14
1 1 38 10
16 2 37 13
16 2 41 1
7 2 24 18
4 2 17 5
4 1 37 32
3 1 37 8
13 2 35 6
15 1 23 11
7 1 47 11
3 1 16 6
12 2 36 6
7 1 24 17
4 2 24 8
14 2 24 9
15 2 24 11
0 3 19 4
0 4 28 9
1 1 5 3
11 1 28 15
5 1 33 5
10 2 21 9
3 3 28 8
2 3 13 2
11 2 41 8
4 2 24 11
3 1 32 11
4 2 31 11
7 2 34 3
11 6 33 6
7 3 33 7
2 2 37 13
7 3 19 9
1 2 14 3
6 2 15 11
11 3 37 12
0 2 20 5
7 4 13 6
17 1 52 14
9 3 47 30
1 2 32 27
30 3 36 19
2 2 12 5
3 1 30 7
4 2 19 11
32 3 45 14
13 1 17 7
16 2 24 4
5 1 32 13
7 3 29 14
5 2 46 2
1 2 21 6
1 3 13 17
11 1 41 16
6 2 33 1
7 1 31 20
0 1 16 13
6 3 26 8
11 2 46 7
8 2 20 5
8 1 44 7
2 2 33 12
1 3 22 5
0 4 14 2
4 1 25 8
5 3 24 11
1 1 21 18
5 1 28 5
2 1 51 19
2 1 16 4
17 2 35 2
4 1 35 1
9 3 48 8
2 1 33 16
0 3 24 7
18 2 33 12
11 1 41 5
5 2 17 3
8 1 19 7
4 3 38 2
23 2 27 10
22 3 46 13
5 3 21 1
5 2 38 10
1 2 20 5
2 2 24 8
0 3 30 9
7 2 44 16
7 1 21 7
0 1 20 10
10 2 33 11
4 2 18 2
11 1 45 17
7 2 32 7
7 2 28 6
5 2 25 10
3 2 57 6
8 1 16 2
7 2 34 4
5 2 22 8
2 2 21 7
4 2 37 15
2 4 36 7
1 1 17 4
0 2 23 9
12 2 48 4
8 3 29 13
0 1 29 7
0 2 27 12
1 1 53 10
3 3 15 5
8 1 40 29
2 2 22 11
10 2 20 7
4 4 27 3
4 1 24 4
2 2 24 5
1 2 19 6
10 3 41 10
57 3 46 9
5 1 20 11
6 2 30 4
0 2 20 5
16 3 35 8
1 2 44 1
2 4 24 8
1 1 20 9
5 3 19 11
5 3 29 15
3 1 21 8
3 3 19 3
8 3 44 0
11 3 34 15
2 2 31 1
11 1 39 11
0 3 24 3
4 2 35 6
2 1 14 6
10 1 30 10
6 2 21 4
9 2 32 3
0 1 34 10
6 2 32 3
7 2 50 11
11 1 35 15
4 1 27 9
1 2 32 27
8 2 54 2
0 3 15 8
2 1 31 13
0 1 31 11
0 4 14 5
0 2 37 15
0 2 51 12
0 2 34 1
0 3 29 12
0 2 22 11
0 2 19 15
0 2 39 13
0 3 25 12
0 1 46 2
0 4 42 10
0 1 38 5
0 3 31 4
0 3 33 1
0 2 24 11
0 1 28 16
0 2 28 13
0 1 29 17
0 1 23 13
0 3 36 21
0 2 30 15
0 2 25 12
0 2 26 17
0 3 19 2
0 2 37 5
0 2 47 12
0 1 21 20
0 3 27 21
0 2 16 7
0 1 35 5
0 2 32 24
0 3 31 6
0 3 36 13
0 2 26 20
0 1 31 13
0 2 46 6
0 2 34 12
0 1 18 13
0 1 29 3
0 3 40 9
0 1 25 3
0 3 45 9
0 2 31 3
0 2 35 4
0 3 29 10
0 2 33 13
0 3 22 4
0 2 26 9
0 2 29 19
0 2 28 12
0 2 30 5
0 4 30 3
0 3 32 14
0 3 45 20
0 2 42 9
0 2 25 4
0 2 20 22
0 3 31 5
0 1 26 13
0 2 32 11
0 1 31 2
0 2 42 17
0 1 37 8
0 3 37 16
0 3 25 10
0 2 33 11
0 2 29 7
0 2 21 16
0 3 30 33
0 1 35 8
0 3 25 6
0 2 54 3
0 2 41 10
0 3 35 1
0 4 26 4
0 2 31 4
0 3 26 11
0 3 34 11
0 2 27 7
0 1 19 14
0 1 38 9
0 2 24 1
0 3 30 20
0 4 43 13
0 2 20 10
0 2 38 1
0 2 41 6
0 1 20 9
0 2 34 2
0 2 24 5
0 2 24 2
0 1 31 19
0 3 49 7
0 1 26 0
0 2 44 6
0 3 36 13
0 3 31 14
0 2 30 20
0 1 27 13
0 2 28 9
0 2 22 20
0 4 36 34
0 3 25 3
0 2 29 17
0 2 40 8
0 2 39 17
0 4 29 8
0 1 27 22
0 1 21 10
0 3 17 5
0 3 28 10
0 1 27 7
0 3 40 7
0 2 21 4
0 1 33 14
0 1 31 14
0 3 37 13
0 2 23 9
0 2 25 1
0 2 30 1
0 2 30 12
0 1 41 8
0 2 26 1
0 2 25 14
0 2 26 3
0 3 36 1
0 4 23 1
0 2 18 0
0 2 34 2
0 1 39 6
0 1 16 15
0 3 34 4
0 4 35 6
0 1 22 10
0 1 35 8
0 2 36 13
0 2 50 8
0 2 28 6
0 1 30 14
0 2 33 26
0 3 28 1
0 1 18 10
0 2 27 4
0 2 27 5
0 2 8 2
0 4 32 16
0 3 40 6
0 4 45 15
0 2 38 3
0 2 29 6
0 1 25 9
12 1 27 5
2 1 33 8
4 3 31 3
1 1 33 4
0 3 20 5
0 2 28 6
2 2 32 12
0 3 30 2
0 3 19 3
1 1 14 19
0 2 28 2
0 3 26 3
0 2 32 13
1 3 21 7
1 4 20 0
2 2 40 8
0 2 35 18
1 1 20 6
6 2 21 3
3 2 33 10
1 1 31 15
1 2 22 5
0 2 24 7
2 2 22 3
3 2 17 6
9 2 30 12
2 4 39 9
0 2 46 8
0 2 26 5
1 2 28 5
6 1 18 3
5 2 19 13
1 3 27 3
1 1 20 10
0 1 27 6
0 4 26 1
0 2 19 4
0 1 26 8
1 1 30 8
0 2 22 2
3 3 42 4
3 1 10 5
3 1 30 12
1 1 25 8
1 2 38 8
2 1 28 13
3 1 18 12
2 2 20 11
2 2 29 0
1 2 18 3
1 1 6 2
0 1 6 3
2 2 24 1
0 1 14 1
1 1 17 5
2 2 20 9
1 4 24 0
1 2 8 10
0 2 18 1
1 1 25 5
2 2 12 7
0 3 18 1
0 1 19 1
8 2 21 2
1 2 23 5
7 2 19 6
1 1 21 5
0 1 16 6
1 1 24 1
0 2 19 3
1 2 14 6
3 2 24 2
6 1 32 21
0 1 16 0
1 2 15 0
1 2 8 8
0 1 14 5
0 2 27 5
2 2 17 2
1 1 19 7
1 2 21 2
0 1 29 7
0 2 18 2
0 2 15 6
2 3 27 3
0 2 57 4
2 3 17 2
1 1 18 8
1 1 17 5
0 1 18 1
1 2 18 4
1 1 12 1
0 2 15 6
1 2 24 4
3 2 14 9
0 1 24 6
3 1 30 9
0 1 19 5
3 1 16 7
5 3 21 1
2 2 17 5
4 1 34 9
1 1 17 7
3 2 30 10
12 1 17 6
2 1 26 6
1 1 18 2
2 2 24 0
0 1 12 2
0 2 3 2
1 1 11 4
1 4 18 13
0 1 25 9
8 2 20 7
0 1 11 7
7 3 26 19
6 1 18 6
6 2 32 5
1 1 31 2
1 2 33 9
4 1 17 6
1 2 34 11
5 1 37 3
0 3 27 10
12 2 25 14
3 1 40 6
6 2 27 9
0 2 31 2
1 1 28 7
2 1 37 11
1 1 19 0
5 2 30 17
4 3 40 6
0 1 27 6
5 3 31 7
0 3 26 10
3 2 32 4
1 3 43 6
3 1 19 3
2 2 37 4
0 3 28 4
6 3 30 11
1 1 30 9
4 3 31 26
1 2 14 1
10 1 35 27
1 1 36 7
5 1 32 8
2 1 28 6
3 1 34 16
3 2 32 5
1 3 11 0
2 2 42 5
0 2 30 7
0 1 32 9
3 3 43 2
7 2 43 6
1 2 21 5
2 1 27 20
1 2 37 7
2 1 37 8
0 1 19 3
0 3 28 5
2 2 33 3
3 1 41 6
13 2 41 9
2 1 38 3
4 1 32 5
2 1 34 8
1 1 27 9
8 1 29 7
4 1 17 6
0 1 20 8
1 2 34 4
1 1 16 11
4 2 33 5
0 2 15 6
1 1 27 4
2 3 15 8
1 1 30 8
3 2 41 20
0 1 25 15
1 3 35 24
4 2 30 21
6 2 30 6
16 2 33 21
2 3 37 3
2 2 30 12
4 1 57 11
0 2 18 16
4 4 20 13
3 1 43 10
3 1 25 15
7 2 31 11
2 1 31 3
5 2 40 11
3 2 28 7
4 2 27 10
0 1 26 6
4 2 24 14
4 2 23 8
0 2 25 11
21 2 33 12
1 3 37 0
3 2 28 7
4 2 27 10
1 2 41 15
2 2 30 16
2 2 28 7
6 1 19 8
4 4 22 19
0 2 38 33
1 1 29 11
1 2 27 2
4 2 24 6
2 1 22 5
",header=TRUE,sep="")
The above problem occurred with pscl version 1.4.6. I have since spoken to the author, and he fixed the bug in version 1.4.7. The current version as of February 2015 is 1.4.8.
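For reference, the core statistic the two programs disagreed on is easy to compute by hand: with m_i the pointwise log-likelihood difference between the two models, the (uncorrected) Vuong z statistic is sqrt(n) * mean(m) / sd(m). A self-contained sketch on a toy pair of Poisson models (not the ZINB/NB pair above), useful as a sanity check against packaged implementations:

```r
set.seed(1)
y  <- rpois(200, lambda = 2)
x  <- rnorm(200)
f1 <- glm(y ~ x, family = poisson)  # model 1
f2 <- glm(y ~ 1, family = poisson)  # model 2
# Pointwise log-likelihood differences between the two fitted models.
m <- dpois(y, fitted(f1), log = TRUE) - dpois(y, fitted(f2), log = TRUE)
z <- sqrt(length(m)) * mean(m) / sd(m)  # positive z favours model 1
```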
