Getting statistics for nodes from a regression tree in the party package - r

I am using the party package in R.
I would like to get various statistics (mean, median, etc.) from various nodes of the resultant tree, but I cannot see how to do this. For example:
airq <- subset(airquality, !is.na(Ozone))
airct <- ctree(Ozone ~ ., data = airq,
controls = ctree_control(maxsurrogate = 3))
airct
plot(airct)
results in a tree with 4 terminal nodes. How would I get the mean airquality for each of those nodes?

I couldn't work out which variable of the node holds the air-quality value you want, but I can show you how to customize your tree plot:
innerWeights <- function(node){
  grid.circle(gp = gpar(fill = "White", col = 1))
  mainlab <- node$psplit$variableName
  label <- paste(mainlab, paste('prediction=', round(node$prediction, 2), sep = ''), sep = '\n')
  grid.text(label = label, gp = gpar(col = 'red'))
}
plot(airct, inner_panel = innerWeights)
Edit: to get statistics by node
library(gridExtra)   # for grid.table()
library(plyr)        # for round_any()
innerWeights <- function(node){
  dat <- round_any(node$criterion$statistic, 0.01)
  grid.table(t(dat))
}
plot(airct, inner_panel = innerWeights)

This is surprisingly hard, harder than I thought. Try something like this:
a <- by(airq, where(airct), colMeans) # or substitute whatever function you like for colMeans
a
a$"3" #access at node three
a[["3"]] #same thing
You might find some other useful examples with ?`BinaryTree-class`.

How to get there if you are lost in R-space (and the documentation does not help you immediately)
First, try str(airct): the output is a bit lengthy, since the results are complex, but for easier cases, e.g. a t-test, this is all you need.
Since print(airct), or simply airct, gives quite useful info, how does print work? Try class(airct) or check the documentation: the result is of class BinaryTree.
OK, we could have seen this from the docs, and in this case the information on the BinaryTree help page is good enough (see the examples on that page).
But assume the author was lazy: then try getAnywhere(print.BinaryTree). Near the top you find y <- x@responses, so try airct@responses next.
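A minimal exploration session along those lines (assuming airct from the question) might look like:
class(airct)                    # "BinaryTree"
getAnywhere(print.BinaryTree)   # peek at the hidden print method
str(airct@responses)            # the slot holding the response values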

You can also do this using the dplyr package.
First get which node each observation belongs to and store it in the dataframe.
library(dplyr)
airq$node <- where(airct)
Then use group_by to group the observations by node, and use summarise to calculate the mean of the Ozone measurement. You can swap mean out for whatever summary statistic function you like.
airq %>% group_by(node) %>% summarise(avg=mean(Ozone))
Which gives the following results.
   node      avg
  (int)    (dbl)
1     3 55.60000
2     5 18.47917
3     6 31.14286
4     8 81.63333
5     9 48.71429
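The same pattern extends to other statistics; a sketch along the same lines:
airq %>%
  group_by(node) %>%
  summarise(avg = mean(Ozone), med = median(Ozone), n = n())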

Related

Is there a way to add species to an ISOMAP plot in R?

I am using the isomap-function from vegan package in R to analyse community data of epiphytic mosses and lichens. I started analysing the data using NMDS but due to the structure of the data ran into problems which is why I switched to ISOMAP which works perfectly well and returns very nice results. So far so good... However, the output of the function does not support plotting of species within the ISOMAP plot as species scores are not available. Anyway, I would really like to add species information to enhance the interpretability of the output.
Does any of you have a solution or hint for this problem? Is there a way to add species post hoc to the plot, as can be done with environmental data?
I would greatly appreciate any help on this topic!
Thank you and best regards,
Inga
No, there is no function to add species scores to isomap. It would look like this:
`sppscores<-.isomap` <- function(object, value)
{
  value <- scale(value, center = TRUE, scale = FALSE)
  v <- crossprod(value, object$points)
  attr(v, "data") <- deparse(substitute(value))
  object$species <- v
  object
}
Or alternatively:
`sppscores<-.isomap` <- function(object, value)
{
  wa <- vegan::wascores(object$points, value, expand = TRUE)
  attr(wa, "data") <- deparse(substitute(value))
  object$species <- wa
  object
}
If ord is your isomap result and comm are your community data, you can use these as:
sppscores(ord) <- comm # either alternative
I have no idea (yet) which of these alternatives is more correct. The first adds species scores as vectors of their linear increase, the second as their weighted averages in ordination space, but expanded so that we allow some species to be more extreme than the site units where they occur.
These will add a new element, species, to the result object ord. Using these scores elsewhere in vegan would need more coding, but you can extract them with vegan::scores. Their scaling is based on the original scale of the community data, so they may be badly scaled with respect to the site-unit points; fixing that properly would require more work. However, you can plot them separately, or multiply them by a constant to give a scaling similar to the site-unit scores.
sp <- scores(ord, display = "species", choices = 1:2)
plot(sp, type = "n", asp = 1)      # sets up an empty plot frame; no points drawn
text(sp, labels = rownames(sp))    # so we add the species labels as text
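As a rough sketch of the "multiply with a constant" idea (the scaling factor here is an arbitrary choice, not a vegan convention):
pts <- ord$points[, 1:2]                 # site scores from the isomap result
sp  <- scores(ord, display = "species", choices = 1:2)
mul <- max(abs(pts)) / max(abs(sp))      # crude common scaling constant
plot(pts, asp = 1, pch = 16)
text(sp * mul, labels = rownames(sp), col = "red")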

How do I use the group argument for the plot_summs() function from the jtools package?

I am plotting my coefficient estimates using the function plot_summs() and would like to divide my coefficients into two separate groups.
The function plot_summs() has an argument groups; however, when I try to use it as explained in the documentation, I get neither the expected result nor an error. Can someone give me an example of how to use this argument, please?
This is the code I currently have:
plot_summs(model.c, scale = TRUE,
           groups = list(pane_1 = c("AQI_average", "temp_yearly"),
                         pane_2 = c("rain_1h_yearly", "snow_1h_yearly")),
           coefs = c("AQI Average" = "AQI_average",
                     "Temperature (in Farenheit)" = "temp_yearly",
                     "Rain volume in mm" = "rain_1h_yearly",
                     "Snow volume in mm" = "snow_1h_yearly"))
And the image below is what I get as a result. What I would like is two separate panes: one containing "AQI_average" and "temp_yearly", and the other containing "rain_1h_yearly" and "snow_1h_yearly". Even though I use the groups argument, I do not get this.
Output of my code
By minimal reproducible example, markus is referring to a piece of code that enables others to exactly reproduce the issue you are referring to on their own computers, as described in the link that they provided.
To me, it seems the problem is that the groups argument does not work in plot_summs - it seems someone here also pointed that out.
If plot_summs is replaced by plot_coefs, the groups argument works for me. However, the scale argument is not available there. A workaround might be:
library(jtools)    # plot_summs(), plot_coefs()
library(ggplot2)   # theme_linedraw()
r <- lm(Sepal.Length ~ Sepal.Width + Petal.Length + Petal.Width, data = iris)
y <- plot_summs(r, scale = TRUE)   # plot for the scaled version
t <- plot_coefs(r,                 # plot for the unscaled version, but with facetting
                groups = list(
                  pane_1 = c("Sepal.Width", "Petal.Length"),
                  pane_2 = c("Petal.Width"))) + theme_linedraw()
y$data$group <- t$data$group   # add the facetting column to the data for the plot
t$data <- y$data               # replace the data with the scaled version
t
I hope this is what you meant!

Prevent plot.gam from producing a figure

Say, I have a GAM that looks like this:
# Load library
library(mgcv)
# Load data
data(mtcars)
# Model for mpg
mpg.gam <- gam(mpg ~ s(hp) + s(wt), data = mtcars)
Now, I'd like to plot the GAM using ggplot2. So, I use plot.gam to produce all the information I need, like this:
foo <- plot(mpg.gam)
This also generates an unwanted figure. (Yes, I realise that I'm complaining that a plotting function plots something...) When using visreg in the same way, I'd simply specify plot = FALSE to suppress the figure, but plot.gam doesn't seem to have this option. My first thought was perhaps invisible would do the job (e.g., invisible(foo <- plot(mpg.gam))), but that didn't seem to work. Is there an easy way of doing this without outputting the unwanted figure to file?
Okay, so I finally figured it out 5 minutes after posting this. There is an option to select which term to plot (e.g., select = 1 is the first term, select = 2 is the second), although the default behaviour is to plot all terms. If, however, I use select = 0 it doesn't plot anything and doesn't give an error, yet returns exactly the same information. Check it out:
# Load library
library(mgcv)
# Load data
data(mtcars)
# Model for mpg
mpg.gam <- gam(mpg ~ s(hp) + s(wt), data = mtcars)
# Produces figures for all terms
foo1 <- plot(mpg.gam)
# Doesn't produce figures
foo2 <- plot(mpg.gam, select = 0)
# Compare objects
identical(foo1, foo2)
[1] TRUE
Bonza!
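An alternative trick (not part of the answer above, just a common workaround) is to send the unwanted figure to a throwaway graphics device:
pdf(NULL)                  # open a null device so nothing is drawn on screen
foo3 <- plot(mpg.gam)      # plot.gam still returns its list of term data
dev.off()                  # close the throwaway device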

Advise a Chemist: Automate/Streamline his Voltammetry Data Graphing Code

I am a chemist who has recently been dealing with a significant amount of voltammetry data. Let me be very clear and give some research information. I run scans from a starting voltage to an ending voltage on solid-state conductive films. These scans are saved as .txt files (naming scheme: run#.txt) in a single folder. I am looking at how conductance changes as temperature changes. The LINEST line plotting current v. voltage at a given temperature gives me a line with slope = conductance. Once I have the conductances (slopes) for each scan, I plot conductance v. temperature to see the temperature-dependent conductance characteristics. I had been doing this in Excel, but have found quicker ways to get the job done using R. I am brand new to R (RStudio) and recognize that my coding is not the best. Without doubt, this process can be streamlined and sped up, which would help immensely. This is how I am performing the process currently:
# Set working directory with folder containing all .txt files for inspection
# Add all .txt files to the global environment
allruns<-list.files(pattern=".txt")
for(i in 1:length(allruns))assign(allruns[i],read.table(allruns[i]))
Since the voltage column (a column of 1000 values) is the same for all runs and is in column V1 of each .txt file, I assign x to be the voltage column from the first file:
x<-run1.txt$V1
All currents (these change as voltage changes) are found in the V2 column of all the .txt files, so I assign a y variable (y1, y2, ...) to each. These are entered one at a time:
y1<-run1.txt$V2
y2<-run2.txt$V2
y3<-run3.txt$V2
# ...
yn<-runn.txt$V2
So that I can get the equation for each LINEST fit (one per scan, to be plotted with abline later), I fit a model for each. Again, entered one at a time:
run1<-lm(y1~x)
run2<-lm(y2~x)
run3<-lm(y3~x)
# ...
runn<-lm(yn~x)
To obtain a single graph with all LINEST (one for each scan ) on the same plot, without the data points showing up, I have been using this pattern of coding to first get all data points on a single plot in separate series:
plot(x,y1,col="transparent",main="LSV Solid Film", xlab = "potential(V)",ylab="current(A)", xlim=rev(range(x)),ylim=range(c(y3,yn)))
par(new=TRUE)
plot(x,y2,col="transparent",main="LSV Solid Film", xlab = "potential(V)",ylab="current(A)", xlim=rev(range(x)),ylim=range(c(y3,yn)))
par(new=TRUE)
plot(x,y3,col="transparent",main="LSV Solid Film", xlab = "potential(V)",ylab="current(A)", xlim=rev(range(x)),ylim=range(c(y1,yn)))
# ...
par(new=TRUE)
plot(x,yn,col="transparent",main="LSV Solid Film", xlab = "potential(V)",ylab="current(A)", xlim=rev(range(x)),ylim=range(c(y1,yn)))
#To obtain all LINEST lines (one for each scan, on the single graph):
abline(run1, col = "", lwd = 1)
abline(run2, col = "", lwd = 1)
abline(run3, col = "", lwd = 1)
# ...
abline(runn, col = "", lwd = 1)
# Then to get each LINEST equation:
summary(run1)
summary(run2)
summary(run3)
# ...
summary(runn)
Each time I use summary(), I copy the slope and paste it into an Excel sheet, along with the corresponding scan temperature, which I have recorded separately. I then graph the conductance v. temperature points for the film as an X-Y scatter with smooth lines to give the temperature-dependent conductance curve. This gives me a single plot of the LINEST lines in R and the conductance v. temperature curve in Excel.
This technique is actually MUCH quicker than doing it all in Excel, but it could surely be done more quickly and efficiently! Also, if I need to change something, this entire process needs to be re-executed with whatever change is necessary. This process takes me maybe 5 hours in Excel and 1.5 hours in R (maybe I am too slow). Nonetheless, any tips to help automate/streamline this further are greatly appreciated.
There are plenty of questions about operating on data in lists; storing a list of matrix or a list of data.frame is fast, and code that operates cleanly on one can be applied to the remaining n-1 very easily.
(Note: the way I'm showing it here is one technique: maintaining everything in well-compartmentalized lists. Others will suggest -- very justifiably -- that combining things into a single data.frame and adding a group variable (to identify from which file/experiment the data originated) will help with more advanced multi-experiment regression or combined plotting, such as with ggplot2. I'm not going to go into that latter technique here, not yet.)
Using for(...) assign(..., read.csv(...)) has long been decried; you have the important part done, so this is relatively easy:
allruns <- sapply(list.files(pattern = "*.txt"), read.table, simplify = FALSE)
(The use of sapply(..., simplify=FALSE) is similar to lapply(...), but it has a nice side-effect of naming the individual list-ified elements with, in this case, each filename. It may not be critical here but is quite handy elsewhere.)
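For example, a quick check (the file names here just follow the question's run#.txt scheme):
names(allruns)                 # e.g. "run1.txt" "run2.txt" ...
head(allruns[["run1.txt"]])    # first rows of one run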
Fitting a model to each run (using its invariant V1 and variable V2 columns) is simple enough:
allLMs <- lapply(allruns, function(mdl) lm(V2 ~ V1, data = mdl))
I'm using each table's V1 here instead of a once-extracted x ... though you might wonder why, I argue for keeping it like this for two reasons: (1) JUST IN CASE the V1 variable is ever even one row different, this will save you; (2) it is very easy to construct the model like this.
At this point, each object within allLMs is an lm object, meaning we might do:
summary(allLMs[[1]])
Plotting: I think I understand why you are using par(new=TRUE), and I have to laugh ... I had been deep in R for a while before I started using that technique. What I think you need is actually much simpler:
xlim <- rev(range(allruns[[1]]$V1))
ylim <- range(sapply(allruns, `[`, "V2"))
# this next plot just sets the box and axes, no points
plot(NA, type = "n", xlim = xlim, ylim = ylim)
# no need to plot points with "transparent" ...
ign <- sapply(allLMs, abline, col = "blue")   # pick your colour; other abline options can go here
Copying all models into Excel, again, using lists:
out <- do.call(rbind, lapply(allLMs, function(m) summary(m)$coefficients[, 1]))
This will now be a single table with one row per run and the intercept and slope in two columns. (Feel free to use similar techniques to extract the other model summary attributes, including the std. error, t value, or Pr(>|t|) (in $coefficients), or $r.squared, $adj.r.squared, etc.)
write.table(out, file = "clipboard", sep = "\t")
and paste into Excel. (Or, better yet, save it to a CSV file and import that, since you might want to keep it around.)
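If you also want, say, the slope's standard error and the R-squared in that table, a sketch along the same lines (the column and row names follow summary.lm):
out2 <- do.call(rbind, lapply(allLMs, function(m) {
  s <- summary(m)
  data.frame(slope    = s$coefficients["V1", "Estimate"],
             slope_se = s$coefficients["V1", "Std. Error"],
             r2       = s$r.squared)
}))
out2   # one row per run; row names are the file names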
One of the tricks to using lists for this is to persevere: keep things in lists as long as you can, so that you don't have to deal with models individually. One mantra is that if you do it once, you shouldn't have to type it again, just loop/apply/map/whatever. Don't extract too much from the lists before you have to.
Note: r2evans' answer provides good general advice and doesn't require heavy package dependencies. But it probably doesn't hurt to see alternative strategies.
The tidyverse can be quite handy for this sort of thing; here's a dummy example for illustration:
library(tidyverse)
# creating dummy data files
dummy <- function(T) {
  V <- seq(-5, 5, length = 20)
  I <- jitter(T*V + T, factor = 1)
  write.table(data.frame(V = V, I = I),
              file = paste0(T, ".txt"),
              row.names = FALSE)
}
purrr::walk(300:320, dummy)
# reading
lf <- list.files(pattern = "\\.txt")
read_one <- function(f, ...) {cbind(T = as.numeric(gsub("\\.txt", "", f)), read.table(f, ...))}
m <- purrr::map_df(lf, read_one, header = TRUE, .id="id")
head(m)
ggplot(m, aes(V, I, group = T)) +
facet_wrap( ~ T) +
geom_point() +
geom_smooth(se = FALSE)
models <- m %>%
split(.$T) %>%
map(~lm(I ~ V, data = .))
coefs <- models %>% map_df(broom::tidy, .id = "T")
ggplot(coefs, aes(as.numeric(T), estimate)) +
geom_line() +
facet_wrap(~term, scales = "free")
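And since the slope of I ~ V is the conductance, a sketch of the conductance-versus-temperature curve keeps only that term:
coefs %>%
  filter(term == "V") %>%
  ggplot(aes(as.numeric(T), estimate)) +
  geom_line() +
  labs(x = "temperature", y = "conductance (slope)")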

Identify spikes/peaks in density plot by group

I created a density plot with the ggplot2 package for R. I would like to identify the spikes/peaks in the plot which occur between 0.01 and 0.02. There are too many legend entries to pick it out, so I deleted the legend. I tried to filter my data to find which group has the most rows between 0.01 and 0.02, then removed that group to see whether the spike/peak disappeared, but it is still there in the plot. Can you suggest a way to identify these spikes/peaks in these plots?
Here is some code :
ggplot(NumofHitsnormalized, aes(NumofHits_norm, fill = name)) +
  geom_density(alpha = 0.2) +
  theme(legend.position = "none") +
  xlim(0.0, 0.15)
## To filter out the data that is in the range of the first spike
test <- NumofHitsnormalized[which(NumofHitsnormalized$NumofHits_norm > 0.01 &
                                  NumofHitsnormalized$NumofHits_norm < 0.02), ]
## To figure out which group (name column) has the most rows there;
## that way I thought I could find the data that leads to the spike
testMatrix <- matrix(ncol = 2, nrow = length(unique(test$name)))
for (i in 1:length(unique(test$name))){
  testMatrix[i, 1] <- unique(test$name)[i]
  testMatrix[i, 2] <- sum(test$name == unique(test$name)[i])   # rows per group in the range
}
Konrad,
This is the new plot made after I filtered my data with the extremevalues package. There are new peaks located at different intervals, and 96% of the initial groups still have data in the new plot (even though the number of rows in the filtered data dropped to 0.023% of the initial dataset), so I can't identify which peaks belong to which groups.
I had a similar problem to this.
What I did was create a rolling mean and a rolling sd of the y values with a window of 3, then:
Calculate the average sd of your baseline data (the data you know won't have peaks).
Set a threshold value.
If above the threshold, flag 1, else 0.
library(TTR)   # provides runMean() and runSD()
d5$roll_mean = runMean(d5$`Current (pA)`, n = 3)
d5$roll_sd = runSD(x = d5$`Current (pA)`, n = 3)
d5$delta = ifelse(d5$roll_sd > 1, 1, 0)
currents = subset(d5, d5$delta == 1, na.rm = TRUE)   # finds all peaks
My threshold was sd > 1. Depending on your data you may want to use the mean or the sd; for slowly rising peaks the mean would be a better choice than the sd.
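A self-contained sketch of the same idea on made-up data (the threshold of 1 is arbitrary, as noted above):
library(TTR)                                          # runSD()
x <- c(rnorm(100), rnorm(10, mean = 5), rnorm(100))   # flat signal with one burst
roll_sd <- runSD(x, n = 3)
peaks <- which(roll_sd > 1)                           # indices flagged as peak-like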
Without looking at the code, I drafted this simple function to add TRUE/FALSE flags to variables indicating outliers:
GenerateOutlierFlag <- function(x) {
# Load required packages
Vectorize(require)(package = c("extremevalues"), char = TRUE)
# Run check for outliers
out_flg <- ifelse(1:length(x) %in% getOutliers(x, method = "I")$iLeft,
TRUE,FALSE)
out_flg <- ifelse(1:length(x) %in% getOutliers(x, method = "I")$iRight,
TRUE,out_flg)
return(out_flg)
}
If you care to read about the extremevalues package you will see that it provides some flexibility in terms of identifying outliers but broadly speaking it's a good tool for finding various peaks or spikes in the data.
Side point
You could actually optimise it significantly by creating one object corresponding to getOutliers(x, method = "I") instead of calling the method twice.
More sensible syntax
GenerateOutlierFlag <- function(x) {
# Load required packages
require("extremevalues")
# Outliers object
outObj <- getOutliers(x, method = "I")
# Run check for outliers
out_flg <- ifelse(1:length(x) %in% outObj$iLeft,
TRUE,FALSE)
out_flg <- ifelse(1:length(x) %in% outObj$iRight,
TRUE,out_flg)
return(out_flg)
}
Results
x <- c(1:10, 1000000, -99099999)
table(GenerateOutlierFlag(x))
FALSE TRUE
10 2
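Applied to the question's data, one way to use this per group might be (a sketch, assuming the column names NumofHitsnormalized, name, and NumofHits_norm from the question):
library(dplyr)
NumofHitsnormalized %>%
  group_by(name) %>%
  mutate(outlier = GenerateOutlierFlag(NumofHits_norm)) %>%
  summarise(n_outliers = sum(outlier)) %>%
  arrange(desc(n_outliers))   # groups contributing the most flagged values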
