Evaluate exam in moodle with a custom evaluation scheme - r

I'm trying to implement a simple evaluation scheme using exams, but none of the options seems to do what I'd like:
There are 5 answer options. I want to give 0.2 points for every marked correct answer and every unmarked incorrect answer, and zero points for every unmarked correct answer and marked incorrect answer. Therefore, a task can yield 0, .2, .4, .6, .8, or 1 points.
I'm aware that this evaluation scheme may have some shortcomings, but I'm counterbalancing those in other ways.
I was able to implement this when I did scanned exams, because I could use a string distance function to tell how many characters differ between two strings that coded the answers and the solutions. But I want to do this in Moodle now, so I cannot control the evaluation.
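For context, a minimal sketch of that string-distance idea (my own illustration, not the asker's actual code), using base R's adist():
# answers and solutions coded as strings with one character per option
solution <- "11001"
answer <- "10001"
# number of differing characters (Levenshtein distance equals the Hamming
# distance here because both strings have the same length)
d <- adist(solution, answer)        # 1
pts <- 0.2 * (nchar(solution) - d)  # 0.2 points per matching position
drop(pts)                           # 0.8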
Here are some examples that I tried:
ee <- exams_eval(partial = TRUE, rule = "all", negative = FALSE)
ee$pointsum("01111", "10000") # should be 0 and returns 0
ee$pointsum("01111", "10001") # should be .2 but returns 0
ee$pointsum("11111", "10001") # should be .4 and returns .4
ee$pointsum("00000", "11001") # should be .4 but returns 0
ee$pointsum("11011", "00011") # should be .6 but returns .5
ee$pointsum("11111", "10101") # should be .6 and returns .6
ee$pointsum("11001", "10001") # Should be .8 but returns .66
ee$pointsum("00000", "00001") # should be .8 but returns 0
ee$pointsum("11001", "11001") # Should be 1 and returns 1
ee$pointsum("00000", "00000") # Should be 1 but returns 0
The previous examples yield the same results when using rule = "false", rule = "false2", or rule = "true". When using rule = "none", this is the only change:
ee$pointsum("01111", "10001") # should be .2 but returns 0.25
Is there a way to implement the above-mentioned evaluation scheme in Moodle?

The R/exams package currently does not support the desired evaluation scheme because Moodle does not support it. The Moodle docs at https://docs.moodle.org/36/en/Moodle_XML_format#Multiple_choice show that partial credit schemes always work in the following way:
Not marking/clicking an <answer> does not yield any points.
Marking/clicking an answer option yields a certain fraction of the overall points.
Hence, R/exams handles this by assigning the fraction 1/#correct to marking/clicking a correct answer. The rule argument only controls which fraction is subtracted when marking/clicking an incorrect answer. The default is the "false2" rule that essentially subtracts 1/#incorrect. For example, an item with 2 correct and 3 incorrect answers is processed with:
ee2 <- exams_eval(partial = TRUE, rule = "false2", negative = FALSE)
ee2$pointvec("11000")
## pos neg
## 0.5000000 -0.3333333
When you use rule = "all" then 100% of the points are removed if an incorrect answer is marked/clicked:
ee <- exams_eval(partial = TRUE, rule = "all", negative = FALSE)
ee$pointvec("11000")
## pos neg
## 0.5 -1.0
There are learning management systems that support more flexible ways of computing the points (e.g., in QTI this is in principle possible) but I don't think your particular scheme can be implemented in Moodle. (If anyone knows more than the Moodle docs above, let me know!)
(You said you are aware of the drawbacks of your evaluation scheme - which is, of course, fair enough. However, just for the record in case anybody else reads this: I'm personally not very fond of the scheme you are proposing. Even if on average each answer option is correct with 50% probability, students can obtain 50% of the points on average by either always clicking all of the answer options or by always clicking none of them. This can get even higher if the probability for each option deviates from 50%. Hence, this sets strange incentives for some students... at least for the business and economics students I'm typically teaching.)
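To put a number on that point, here is a quick simulation sketch (my own addition, not part of the original answer), scoring by the fraction of matching options as described in the question:
set.seed(1)
# each of the 5 options is correct independently with probability 0.5
solutions <- replicate(10000, paste(sample(0:1, 5, replace = TRUE), collapse = ""))
match_score <- function(correct, answer)
  mean(strsplit(correct, "")[[1]] == strsplit(answer, "")[[1]])
mean(sapply(solutions, match_score, answer = "11111"))  # "click everything": about 0.5
mean(sapply(solutions, match_score, answer = "00000"))  # "click nothing": about 0.5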

Strangely, there is no combination of exams_eval parameters that would give the desired scheme. One can verify that by checking all 20 combinations. For instance,
combs <- expand.grid(partial = c(TRUE, FALSE), negative = c(TRUE, FALSE),
                     rule = c("false2", "false", "true", "all", "none"),
                     stringsAsFactors = FALSE)
sapply(1:nrow(combs), function(n) {
  ee <- exams_eval(partial = combs[n, 1], negative = combs[n, 2], rule = combs[n, 3])
  ee$pointsum("11001", "10001")
})
# [1] 0.6666667 -1.0000000 0.6666667 0.0000000 0.6666667 -1.0000000 0.6666667 0.0000000
# [9] 0.6666667 -1.0000000 0.6666667 0.0000000 0.6666667 -1.0000000 0.6666667 0.0000000
# [17] 0.6666667 -1.0000000 0.6666667 0.0000000
That's particularly strange because the scheme is quite simple - counting matches, which we may implement as follows:
pointsum <- function(correct, answer) {
  correct <- as.numeric(strsplit(correct, "")[[1]])
  answer <- as.numeric(strsplit(answer, "")[[1]])
  mean(correct == answer)
}
pointsum("01111", "10000") # should be 0
# [1] 0
pointsum("01111", "10001") # should be .2
# [1] 0.2
pointsum("11111", "10001") # should be .4
# [1] 0.4
pointsum("00000", "11001") # should be .4
# [1] 0.4
pointsum("11011", "00011") # should be .6
# [1] 0.6
pointsum("11111", "10101") # should be .6
# [1] 0.6
pointsum("11001", "10001") # should be .8
# [1] 0.8
pointsum("00000", "00001") # should be .8
# [1] 0.8
pointsum("11001", "11001") # should be 1
# [1] 1
pointsum("00000", "00000") # should be 1
# [1] 1

split() returns "longer object length is not a multiple of shorter object length"

Context
I asked this question recently:
Comparing partitions from split() using a nested for loop containing an if statement
where I needed to compare partitions generated by split() from a distance matrix, using the code fix provided by @robertdj:
set.seed(1234) # set random seed for reproducibility
# generate random normal variates
x <- rnorm(5)
y <- rnorm(5)
df <- data.frame(x, y) # merge vectors into dataframe
d <- dist(x) # generate distance matrix
splt <- split(d, 1:5) # split the 10 distances into 5 partitions
for (i in 1:length(splt)) {
  for (j in 1:length(splt)) {
    if (i != j) {
      a <- length(which(splt[[i]] >= min(splt[[j]]))) / length(splt[[i]])
      b <- length(which(splt[[j]] <= max(splt[[i]]))) / length(splt[[j]])
    }
  }
}
I generated an MWE where each split contained the same number of elements. I did this just for illustrative purposes, fully knowing that this would not necessarily hold for real data.
As per @Robert Hacken's comment, if I instead do
d <- na.omit(d[lower.tri(d)])
I get partitions of unequal length.
Real Data
However, my real data does not have the "same size" property, and it contains many more partitions than the 5 in my MWE.
Here is my code:
splt <- split(dist_matrix, sub("(?:(.*)\\|){2}(\\w+)\\|(\\w+)\\|.*?$", "\\1-\\2", colnames(dist_matrix)))
The distance matrix dist_matrix contains FASTA headers from which I extract the species names.
I then use splt above in the doubly nested loop.
For instance, splt[[4]] contains 5 values, whereas splt[[10]] contains 9.
splt[[4]]
[1] 0.1316667 0.1383333 0.1166667 0.1333333 0.1216667
splt[[10]]
[1] 0.1450000 0.1483333 0.1316667 0.1316667 0.1333333 0.1333333 0.1166667 0.1166667 0.1200000
Expected Output
For my real problem, each partition corresponds to distances for a single species to all other unique species. So, if Species X has two DNA sequences representing it and there are 10 species in total, the partition for Species X should contain 20 distances. However, I don't want the partition to include the distance between the two sequences for Species X itself.
splt would thus contain 10 partitions (each not necessarily of the same length), one for each species.
The expected output of a and b is a number between 0 and 1 inclusive. I think these numbers should be small in my real example, but they are large when I try to run my code, which I think is a consequence of the warning.
What I've Done
I've read on SO that %in% is typically used to resolve the warning
In splt[[i]] == splt[[j]] :
longer object length is not a multiple of shorter object length
except in my case, I believe I would need `%notin%` <- Negate(`%in%`).
However, %notin% gives the error in my original post
the condition has length > 1
Question
How can my nested loop be altered to remove the warning?
I'm going to go out on a limb by interpreting parts of what you say, discarding your code, and seeing what I can come up with. If nothing else, it may spark a conversation that explains which of my interpretations are correct (and which are incorrect).
Starting with the splt as generated by the random data, then replacing elements 4 and 5 with longer vectors,
set.seed(1234)
x <- rnorm(5)
y <- rnorm(5)
df <- data.frame(x, y)
d <- dist(x)
splt <- split(d, 1:5)
splt[[4]] <- rnorm(4)
splt[[5]] <- rnorm(10)
We have:
splt <- list("1" = c(1.48449499149608, 2.62312694474001), "2" = c(2.29150692606848, 0.15169544670039), "3" = c(1.13863195324393, 3.43013887931241), "4" = c(-0.477192699753547, -0.998386444859704, -0.77625389463799, 0.0644588172762693), "5" = c(-0.693720246937475, -1.44820491038647, 0.574755720900728, -1.02365572296388, -0.0151383003641817, -0.935948601168394, 1.10229754620026, -0.475593078869057, -0.709440037512506, -0.501258060594761))
splt
# $`1`
# [1] 1.484495 2.623127
# $`2`
# [1] 2.2915069 0.1516954
# $`3`
# [1] 1.138632 3.430139
# $`4`
# [1] -0.47719270 -0.99838644 -0.77625389 0.06445882
# $`5`
# [1] -0.6937202 -1.4482049 0.5747557 -1.0236557 -0.0151383 -0.9359486 1.1022975 -0.4755931 -0.7094400 -0.5012581
You reference expressions like which(splt[[i]] >= min(splt[[j]])), which I'm interpreting to mean "what is the ratio of splt[[i]] that is above the max value in splt[[j]]". Since we're comparing (for example) splt[[1]] with all of splt[[2]] through splt[[5]] here, and likewise for the others, we're going to have a square matrix where the diagonal is splt[[i]]-vs-splt[[i]] (likely not interesting).
Some quick math so we know what we should end up with:
splt[[1]]
# [1] 1.484495 2.623127
range(splt[[2]])
# [1] 0.1516954 2.2915069
Since one of the two values in [[1]] is greater than [[2]]'s max of 2.29, we expect 0.5 in a comparison between the two (for >= max(.)); similarly, none of [[1]] is below 0.15, so we expect 0 there.
Similarly, [[5]] over [[4]]:
splt[[5]]
# [1] -0.6937202 -1.4482049 0.5747557 -1.0236557 -0.0151383 -0.9359486 1.1022975 -0.4755931 -0.7094400 -0.5012581
range(splt[[4]])
# [1] -0.99838644 0.06445882
### 2 of 10 are greater than the max
sum(splt[[5]] >= max(splt[[4]])) / length(splt[[5]])
# [1] 0.2
### 2 of 10 are less than (or equal to) the min
sum(splt[[5]] <= min(splt[[4]])) / length(splt[[5]])
# [1] 0.2
We can use outer, but sometimes that can be confusing, especially since in this case we'd need to Vectorize the anonymous function passed to it. I'll adapt your double for-loop premise into nested sapply calls.
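For reference, a rough sketch of that outer()/Vectorize() route (my addition; the nested sapply version used below gives the same numbers, transposed):
frac_ge_max <- Vectorize(
  function(i, j) sum(splt[[i]] >= max(splt[[j]])) / length(splt[[i]])
)
outer(seq_along(splt), seq_along(splt), frac_ge_max)
# rows index i (whose fraction is reported), columns index j (whose max is used),
# i.e. the transpose of the sapply result shown below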
Greater than the other's max
sapply(splt, function(y) sapply(setNames(splt, paste0("max", seq_along(splt))), function(z) sum(y >= max(z)) / length(y)))
# 1 2 3 4 5
# max1 0.5 0.0 0.5 0.00 0.0
# max2 0.5 0.5 0.5 0.00 0.0
# max3 0.0 0.0 0.5 0.00 0.0
# max4 1.0 1.0 1.0 0.25 0.2
# max5 1.0 0.5 1.0 0.00 0.1
Interpretation and subset validation:
1 with max of 2: comparing [[1]] (first column) with the max value from [[2]] (second row), half of 1's values are greater, so we have 0.5 (as expected).
5 with max of 4: comparing [[5]] (fifth column) with the max value from [[4]] (fourth row), 0.2 meet the condition.
Less than the other's min
sapply(splt, function(y) sapply(setNames(splt, paste0("min", seq_along(splt))), function(z) sum(y <= min(z)) / length(y)))
# 1 2 3 4 5
# min1 0.5 0.5 0.5 1.00 1.0
# min2 0.0 0.5 0.0 1.00 0.8
# min3 0.0 0.5 0.5 1.00 1.0
# min4 0.0 0.0 0.0 0.25 0.2
# min5 0.0 0.0 0.0 0.00 0.1
Same two pairs:
1 with min of 2 (row 2, column 1) is 0, as expected
5 with min of 4 (row 4, column 5) is 0.2, as expected
Edit: @compbiostats pointed out that while sum(..) should produce the same results as length(which(..)), the latter may be more robust to missing data (e.g., NA values; cf. Difference between sum(), length(which()), and nrow() in R). For sum(..) to share that resilience, we should add na.rm = TRUE to both sum(.) and min(.) in the above calls. Thanks @compbiostats!
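For concreteness, here is the "less than the other's min" call with that change applied (a sketch using the same splt as above):
sapply(splt, function(y)
  sapply(setNames(splt, paste0("min", seq_along(splt))),
         function(z) sum(y <= min(z, na.rm = TRUE), na.rm = TRUE) / length(y)))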

Create multiple confusion matrices in R using loops

I am trying to create multiple confusion matrices from one dataframe, with each matrix generated based off a different condition in the dataframe.
So for the dataframe below, I want a confusion matrix for when Value = 1, Value = 2, and Value = 3
observed predicted Value
       1         1     1
       0         1     1
       1         0     2
       0         0     2
       1         1     3
       0         0     3
and see the results like:
Value Sensitivity Specificity PPV NPV
    1         .96         .71 .84 .95
    2         .89         .63 .30 .45
    3         .88         .95 .28 .80
This is what I tried with a reproducible example. I am trying to write a loop that looks at every row, determines if Age = 1, and then pulls the values from the predicted and observed columns to generate a confusion matrix. Then I manually pull out the values from the confusion matrix to write out sen, spec, ppv, and npv and tried to combine all the matrices together. And then the loop starts again with Age = 2.
data(scat)
df<-scat %>% transmute(observed=ifelse(Site=="YOLA","case", "control"), predicted=ifelse(Location=="edge","case", "control"),Age)
x<-1 #evaluate at ages 1 through 5
for (i in dim(df)[1]) { #for every row in df
  while(x<6) { #loop stops at Age=5
    if(x=df$Age) {
      q<-confusionMatrix(data = df$predicted, reference = df$observed, positive = "case")
      sensitivity = q$table[1,1]/(q$table[1,1]+q$table[2,1])
      specificity = q$table[2,2]/(q$table[2,2]+q$table[1,2])
      ppv = q$table[1,1]/(q$table[1,1]+q$table[1,2])
      npv = q$table[2,2]/(q$table[2,2]+q$table[2,1])
      matrix(c(sensitivity, specificity, ppv, npv),ncol=4,byrow=TRUE)
    }
  }
  x <- x + 1 #confusion matrix at next Age value
}
final<- rbind(matrix) #combine all the matrices together
However, this loop is completely non-functional. I'm not sure where the error is.
Your code can be simplified and the desired output achieved like this:
library(caret)
library(dplyr)
data(scat)
df <- scat %>%
  transmute(observed = factor(ifelse(Site == "YOLA", "case", "control")),
            predicted = factor(ifelse(Location == "edge", "case", "control")),
            Age)
final <- t(sapply(sort(unique(df$Age)), function(i) {
  q <- confusionMatrix(data = df$predicted[df$Age == i],
                       reference = df$observed[df$Age == i],
                       positive = "case")$table
  c(sensitivity = q[1, 1] / (q[1, 1] + q[2, 1]),
    specificity = q[2, 2] / (q[2, 2] + q[1, 2]),
    ppv = q[1, 1] / (q[1, 1] + q[1, 2]),
    npv = q[2, 2] / (q[2, 2] + q[2, 1]))
}))
Resulting in
final
#> sensitivity specificity ppv npv
#> [1,] 0.0 0.5625000 0.00000000 0.8181818
#> [2,] 0.0 1.0000000 NaN 0.8000000
#> [3,] 0.2 0.5882353 0.06666667 0.8333333
#> [4,] 0.0 0.6923077 0.00000000 0.6923077
#> [5,] 0.5 0.6400000 0.25000000 0.8421053
However, it's nice to know why your own code didn't work, so here are a few issues that might be useful to consider (a corrected loop-based sketch follows the list):
You need factor columns rather than character columns for confusionMatrix
You were incrementing through the rows of df, but you need one iteration for each unique age, not each row in your data frame.
Your line to increment x happens outside of the while loop, so x never increments and the loop never terminates, so the console just hangs.
You are doing if(x = df$Age), but you need a == to test equality.
It doesn't make sense to compare x to df$Age anyway, because x is length 1 and df$Age is a long vector.
You have unnecessary repetition by doing q$table each time. You can just make q equal to q$table to make your code more readable and less error-prone.
You call matrix at the end of the loop, but you don't store it anywhere, so the whole loop doesn't actually do anything.
In the last line you are trying to rbind an object called matrix, which doesn't exist.
Your lack of spaces between math operators, commas and variables makes the code less readable and harder to debug. I'm not just saying this as a stylistic point; it is a major source of errors I see frequently here on SO.
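For completeness, here is a sketch of a corrected loop-based version that addresses the points above (my addition; it assumes the df with factor columns created earlier, and the sapply() version above remains the cleaner route):
results <- list()
for (a in sort(unique(df$Age))) {              # one iteration per unique age
  sub <- df[df$Age == a, ]                     # rows for this age only
  q <- confusionMatrix(data = sub$predicted,
                       reference = sub$observed,
                       positive = "case")$table
  results[[as.character(a)]] <- c(sensitivity = q[1, 1] / (q[1, 1] + q[2, 1]),
                                  specificity = q[2, 2] / (q[2, 2] + q[1, 2]),
                                  ppv = q[1, 1] / (q[1, 1] + q[1, 2]),
                                  npv = q[2, 2] / (q[2, 2] + q[2, 1]))
}
final <- do.call(rbind, results)               # combine all rows into one matrix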

R: What is wrong with rounding? [duplicate]

Yes, I know why we round to the nearest even number when we are exactly in the middle of two numbers (i.e. 2.5 becomes 2). But when I evaluate data for some people, they don't want this behaviour. What is the simplest method to get this:
x <- seq(0.5,9.5,by=1)
round(x)
to be 1,2,3,...,10 and not 0,2,2,4,4,...,10.
Edit: To clarify: 1.4999 should be 1 after rounding. (I thought this would be obvious.)
This is not my own function, and unfortunately, I can't find where I got it at the moment (originally found as an anonymous comment at the Statistically Significant blog), but it should help with what you need.
round2 = function(x, digits) {
  posneg = sign(x)
  z = abs(x) * 10^digits
  z = z + 0.5 + sqrt(.Machine$double.eps)
  z = trunc(z)
  z = z / 10^digits
  z * posneg
}
x is the object you want to round, and digits is the number of digits you are rounding to.
An Example
x = c(1.85, 1.54, 1.65, 1.85, 1.84)
round(x, 1)
# [1] 1.8 1.5 1.6 1.8 1.8
round2(x, 1)
# [1] 1.9 1.5 1.7 1.9 1.8
(Thanks @Gregor for the addition of + sqrt(.Machine$double.eps).)
If you want something that behaves exactly like round except for those xxx.5 values, try this:
x <- seq(0, 1, 0.1)
x
# [1] 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0
floor(0.5 + x)
# [1] 0 0 0 0 0 1 1 1 1 1 1
As @CarlWitthoft said in the comments, this is the IEC 60559 standard as mentioned in ?round:
Note that for rounding off a 5, the IEC 60559 standard is expected to be used, ‘go to the even digit’. Therefore round(0.5) is 0 and round(-1.5) is -2. However, this is dependent on OS services and on representation error (since e.g. 0.15 is not represented exactly, the rounding rule applies to the represented number and not to the printed number, and so round(0.15, 1) could be either 0.1 or 0.2).
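For instance, a quick way to see that representation error (my addition; output shown for a typical IEEE-754 system):
sprintf("%.20f", 0.15)
# [1] "0.14999999999999999445"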
An additional explanation by Greg Snow:
The logic behind the round-to-even rule is that we are trying to represent an underlying continuous value, and if x comes from a truly continuous distribution then the probability that x == 2.5 is 0; the 2.5 was probably already rounded once from any value between 2.45 and 2.54999999999999... If we use the round-up-on-0.5 rule that we learned in grade school, then the double rounding means that values between 2.45 and 2.50 will all round to 3 (having been rounded first to 2.5). This will tend to bias estimates upwards. To remove the bias we need to either go back to before the rounding to 2.5 (which is often impossible or impractical), or just round up half the time and round down half the time (or better, round proportionally to how likely we are to see values below or above 2.5 rounded to 2.5, but that will be close to 50/50 for most underlying distributions). The stochastic approach would be to have the round function randomly choose which way to round, but deterministic types are not comfortable with that, so "round to even" was chosen (round to odd should work about the same) as a consistent rule that rounds up and down about 50/50.
If you are dealing with data where 2.5 is likely to represent an exact
value (money for example), then you may do better by multiplying all
values by 10 or 100 and working in integers, then converting back only
for the final printing. Note that 2.50000001 rounds to 3, so if you
keep more digits of accuracy until the final printing, then rounding
will go in the expected direction, or you can add 0.000000001 (or
other small number) to your values just before rounding, but that can
bias your estimates upwards.
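A small sketch of that last suggestion (my own example values), doing the arithmetic in integer cents and converting back only for display:
prices <- c(2.50, 3.75, 2.45)
cents <- as.integer(round(prices * 100))  # store exact integer cents
total <- sum(cents)                       # arithmetic on integers
total / 100                               # convert back only when printing
# [1] 8.7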
This appears to work:
rnd <- function(x) trunc(x+sign(x)*0.5)
Ananda Mahto's response seems to do this and more - I am not sure what the extra code in his response is accounting for; or, in other words, I can't figure out how to break the rnd() function defined above.
Example:
(x <- seq(-2, 2, by=0.5))
# [1] -2.0 -1.5 -1.0 -0.5 0.0 0.5 1.0 1.5 2.0
round(x)
# [1] -2 -2 -1 0 0 0 1 2 2
rnd(x)
# [1] -2 -2 -1 -1 0 1 1 2 2
Depending on how comfortable you are with jiggling your data, this works:
round(x+10*.Machine$double.eps)
# [1] 1 2 3 4 5 6 7 8 9 10
This method:
round2 = function(x, n) {
  posneg = sign(x)
  z = abs(x) * 10^n
  z = z + 0.5
  z = trunc(z)
  z = z / 10^n
  z * posneg
}
does not seem to work well when we have numbers with many digits. For example, round2(2436.845, 2) gives 2436.84 rather than the expected 2436.85. The issue seems to occur at the trunc(z) step.
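One way to see what is going on (my addition, consistent with the result reported above) is to look at the intermediate product, which is already just below 243684.5 before 0.5 is added and trunc() is applied:
sprintf("%.10f", 2436.845 * 100)    # a hair below 243684.5
trunc(2436.845 * 100 + 0.5) / 100   # hence 2436.84 rather than 2436.85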
Overall, I think it has something to do with the way R stores floating-point numbers, so trunc() doesn't always behave as you'd expect. I was able to get around it in not the most elegant way:
round2 = function(x, n) {
  posneg = sign(x)
  z = abs(x) * 10^n
  z = z + 0.5
  z = trunc(as.numeric(as.character(z)))
  z = z / 10^n
  z * posneg
}
This mimics the rounding away from zero at .5:
round_2 <- function(x, digits = 0) {
  x = x + abs(x) * sign(x) * .Machine$double.eps
  round(x, digits = digits)
}
round_2(.5 + -2:4)
# [1] -2 -1  1  2  3  4  5

findAssocs() of the tm package suppresses associations

I am using findAssocs() of the tm package on a document frequency matrix to identify words which are associated with particular term(s) across various documents in a corpus.
My problem is that I get different output when giving a vector of terms as input to the function compared to giving a single term as input.
Here is my example.
library(tm)
txt <- c("alpha bravo", "alpha charlie", "alpha charlie", "zulu")
corp <- Corpus(VectorSource(txt))
dtm <- DocumentTermMatrix(corp)
This returns the following dtm:
> as.matrix(dtm)
    Terms
Docs alpha bravo charlie zulu
   1     1     1       0    0
   2     1     0       1    0
   3     1     0       1    0
   4     0     0       0    1
If I would want to identify all terms associated with "alpha" I get the following output (as intended):
> findAssocs(dtm, "alpha", 0.00)
$alpha
charlie bravo
0.58 0.33
I could do the same for "bravo" and get the following output (as intended):
> findAssocs(dtm, "bravo", 0.00)
$bravo
alpha
0.33
As I would like to find those associations for a number of terms I have passed a vector to findAssocs in order to get the required output. However, if I pass a vector of terms (chr) to the function the output is different from the one I get for single inputs:
> findAssocs(dtm, c("alpha","bravo"), 0.00)
$alpha
charlie
0.58
$bravo
numeric(0)
Actually, the association between "alpha" and "bravo" is omitted, which is not the behavior I would have expected here. The function seems to treat the individual terms independently of each other and thus does not analyze the correlation between "alpha" and "bravo" if they are both passed to the function in a vector.
Can anyone explain that behavior and tell me how to avoid it? As a workaround I could apply the function to each single term, but that is not really handy...
UPDATE
What I find odd is that the correlation between "alpha" and "bravo" is not omitted if we plot the associations, e.g. through the following code:
> freqTerm <- findFreqTerms(dtm, 1)
> freqTerm
[1] "alpha" "bravo" "charlie" "zulu"
plot(dtm, term=freqTerm, corThreshold=0.0, weighting=T, attrs=list(node=list(fixedsize=FALSE, shape="ellipse")))
How is plot(dtm, term=freqTerm ... different from "findAssocs()"?
tm::findAssocs() omits direct comparisons for exactly the reasons stated in the comment by @Steven Beauport. Given that you are searching for a small set of terms likely to be highly correlated, this seems more like a bug than a feature. This is illustrated by the example of this function (see ?tm::findAssocs), where the terms oil and opec are the most similar, but this is masked by the omission of each from the other's association vector.
An alternative is to use the equivalent feature from the quanteda package:
library(quanteda)
txt <- c("alpha bravo", "alpha charlie", "alpha charlie", "zulu")
corp <- corpus(txt)
dtm <- dfm(corp, verbose = FALSE)
# this also works fine if you want to go straight from text:
# dtm <- dfm(txt, verbose = FALSE)
(simlist <- similarity(dtm, c("alpha","bravo"), margin = "features"))
## similarity Matrix:
## $alpha
## charlie bravo zulu
## 0.5774 0.3333 -1.0000
##
## $bravo
## alpha zulu charlie
## 0.3333 -0.3333 -0.5774
Or if you prefer it as a matrix:
as.matrix(simlist)
## alpha bravo
## alpha 1.0000000 0.3333333
## charlie 0.5773503 -0.5773503
## bravo 0.3333333 1.0000000
## zulu -1.0000000 -0.3333333
similarity() can do cosine similarities as well as other similarities defined in the proxy package, but the (Pearson's) correlation and cosine methods are currently implemented in fully sparse computation, whereas the others are not (yet). By defining margin = "documents", furthermore, you can compare documents instead of terms, for instance for clustering.
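As a rough illustration of that last point (the exact call is my assumption based on the description above; check ?similarity in your quanteda version), the document-wise comparison might look like:
docsim <- similarity(dtm, margin = "documents")  # assumed call, see note above
as.matrix(docsim)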

Find the maximum value in decision tree

I created a decision tree with the party package in R.
I'm trying to get the route/branch with the maximum value.
It can be the mean value that comes from a box plot (regression tree), or the probability value that comes from a binary classification tree.
(source: rdatamining.com)
This can be done pretty easily actually, though while your definition of maximum value is clear for a regression tree, it is not very clear for a classification tree, as in each node a different level can have its own maximum.
Either way, here's a pretty simple helper function that will return the predictions for each type of tree:
GetPredicts <- function(ct) {
  f <- function(ct, i) nodes(ct, i)[[1]]$prediction
  Terminals <- unique(where(ct))
  Predictions <- sapply(Terminals, f, ct = ct)
  if (is.matrix(Predictions)) {
    colnames(Predictions) <- Terminals
    return(Predictions)
  } else {
    return(setNames(Predictions, Terminals))
  }
}
Now, luckily you took your trees from the examples in ?ctree, so we can test them (next time, please provide the code you used yourself).
Regression Tree (your first tree)
## load the package and create the tree
library(party)
airq <- subset(airquality, !is.na(Ozone))
airct <- ctree(Ozone ~ ., data = airq,
               controls = ctree_control(maxsurrogate = 3))
plot(airct)
Now, test the function
res <- GetPredicts(airct)
res
# 5 3 6 9 8
# 18.47917 55.60000 31.14286 48.71429 81.63333
So we've got the predictions for each terminal node. You can easily proceed with which.max(res) from here (I'll leave it for you to decide).
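For example (my addition, using the res shown above):
names(res)[which.max(res)]  # terminal node with the largest mean prediction
# [1] "8"
max(res)                    # its predicted value
# [1] 81.63333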
Classification tree (your second tree)
irisct <- ctree(Species ~ ., data = iris)
plot(irisct, type = "simple")
Run the function
res <- GetPredicts(irisct)
res
# 2 5 6 7
# [1,] 1 0.00000000 0.0 0.00000000
# [2,] 0 0.97826087 0.5 0.02173913
# [3,] 0 0.02173913 0.5 0.97826087
Now, the output is a bit harder to read because each class has its own probabilities. You could make this a bit more readable using
row.names(res) <- levels(iris$Species)
res
# 2 5 6 7
# setosa 1 0.00000000 0.0 0.00000000
# versicolor 0 0.97826087 0.5 0.02173913
# virginica 0 0.02173913 0.5 0.97826087
Then, you could do something like the following in order to get the overall maximum value:
which(res == max(res), arr.ind = TRUE)
# row col
# setosa 1 1
For column/row maxes, you could do
matrixStats::colMaxs(res)
# [1] 1.0000000 0.9782609 0.5000000 0.9782609
matrixStats::rowMaxs(res)
# [1] 1.0000000 0.9782609 0.9782609
But, again, I'll leave it to you to decide how to proceed from here.
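As one possible next step (my addition), the majority class per terminal node can be read off like this (ties, such as node 6 here, resolve to the first class):
rownames(res)[apply(res, 2, which.max)]
# [1] "setosa"     "versicolor" "versicolor" "virginica"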
