Looping histograms and subsets in R and printing to PDF

this is the first question I have asked on Stack Overflow. However, I am a student and have been using this website for several years without needing to ask a question. There is a wealth of information on here and I appreciate the people who take the time to answer questions. If I need to make any changes to the question or format of the question I will be more than happy to.
I am researching habitat use by a wildlife species. I conducted field studies on GPS collared animals and took vegetative measurements in the field and landscape measurements in GIS.
Currently, I need to classify each plot (unique.id) into a "forest type" (e.g., Douglas-fir low-elevation forest, ponderosa pine woodland) based on the attributes of the plot. The "forest type" categories are arbitrary and created by me.
To aid in this, I would like to display a histogram of the tree diameter distributions by tree species for each plot. In the same image window, I would like to display a few other variables from that plot, such as canopy cover (canopy), stand age (age), the species of the tree the age was taken from (agespecies), elevation (Elev), aspect (Aspect), and stem density (density). Because of the large number of plots, it would be nice to print them all to a PDF or other format for review outside of R.
I am not looking for R to classify the plots for me, just to provide summary and visual information for each plot to assist me in classifying it.
So far, I have been using the “histogram” function in the “lattice” library, but am open to using something different. I have been able to write code to build a diameter histogram and loop it for each plot. I have also been able to add a subset if I am just running one plot at a time, but I don’t know how to loop the subset. I also am unsure of how to add multiple subsets (canopy, age, agespecies, Elev, Aspect, density) to the histogram.
Finally, most plots do not contain every possible tree species. Is there a way to order the histograms by which species has the highest number of counts and/or to hide the histograms that are empty?
I have pasted my code so far and the structure of the data below. The data are in two separate files, "dbh" and "masterplot".
Data:
> str(dbh)
'data.frame': 80719 obs. of 7 variables:
$ unique.id: Factor w/ 1165 levels "CalvA1","CalvA10",..: 1 1 1 1 1 1 1 1 1 1 ...
$ species : Factor w/ 14 levels "abla","abpr",..: 1 2 3 3 4 4 5 5 5 7 ...
$ dbh : num 7.8 1.1 3.3 3.8 4.1 3.4 6.1 4.2 3.2 3.8 ...
> str(masterplot)
'data.frame': 1170 obs. of 41 variables:
$ unique.id : Factor w/ 1165 levels "CalvA1","CalvA10",..: 1 2 3 4 5 6 7 8 9 10 ...
$ canopy : num 16 19 28 25 1 3 23 14 7 18 ...
$ age : num 147 72 167 64 153 144 192 154 173 44 ...
$ agespecies : Factor w/ 14 levels "abla","alru",..: 6 11 7 11 7 6 11 6 11 6 ...
$ Elev : num 1597 1850 1638 1540 1695 ...
$ Aspect : num 238.6 246.1 165.5 242.1 24.4 ...
$ density : num 8700 6600 6800 7800 14600 5600 13900 4600 3900 4000 ...
Code:
lathist.fx <- function(x) {
  windows()  # open a new graphics window for this plot
  histogram(~ dbh[unique.id == x] | species, data = dbh,
            breaks = c(0, 4, 11, 50))
}
for (i in unique(dbh$unique.id)) lathist.fx(i)  # one histogram window per plot
I think the subsets will look something like this…
sub=masterplot$age[masterplot$unique.id=="LeftA36"]
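For reference, a minimal sketch of one way this could fit together (untested, and assuming the column names shown in the str() output above): subset both data frames by unique.id inside the loop, drop unused species levels so empty panels are not drawn, paste the plot-level attributes from masterplot into the title, and print one lattice page per plot to a PDF.
library(lattice)

pdf("plot_histograms.pdf", width = 10, height = 7.5)   # hypothetical output file
for (i in unique(dbh$unique.id)) {
  sub.dbh  <- droplevels(dbh[dbh$unique.id == i, ])    # trees on this plot; drop unused species levels
  sub.plot <- masterplot[masterplot$unique.id == i, ]  # this plot's attributes
  if (nrow(sub.dbh) == 0 || nrow(sub.plot) == 0) next
  # order the species panels by how many trees each has (largest first)
  sub.dbh$species <- reorder(sub.dbh$species, sub.dbh$species,
                             FUN = function(v) -length(v))
  main.txt <- paste0(i, ": canopy=", sub.plot$canopy,
                     ", age=", sub.plot$age, " (", sub.plot$agespecies, ")",
                     ", elev=", round(sub.plot$Elev),
                     ", aspect=", round(sub.plot$Aspect),
                     ", density=", sub.plot$density)
  print(histogram(~ dbh | species, data = sub.dbh,
                  breaks = c(0, 4, 11, 50), main = main.txt))
}
dev.off()
The droplevels() call is what makes the empty species panels disappear, and because everything is written through pdf()/dev.off(), the windows() call is no longer needed.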

Related

"Number of observations <= number of random effects" error

I am using a package called diagmeta for meta-analysis purposes. I can use this package with a built-in data set called Schneider2017. However, when I make my own database/data set, I get the following error:
Error: number of observations (=300) <= number of random effects (=3074) for term (Group * Cutoff | Study); the random-effects parameters and the residual variance (or scale parameter) are probably unidentifiable
Another thread here on SO suggests the error is caused by the data format of one or more columns. I have made sure every column's data type matches that in the Schneider2017 dataset - no effect.
Link to the other thread
I have tried extracting all of the data from the Schneider2017 dataset into Excel and then importing the dataset from Excel through RStudio. This again makes no difference, which suggests to me that something in the data format could still be different, although I can't see how.
diag2 <- diagmeta(tpos, fpos, tneg, fneg, cutpoint,
                  studlab = paste(author, year, group),
                  data = SRschneider,
                  model = "DIDS", log.cutoff = FALSE,
                  check.nobs.vs.nRE = "ignore")
I expected the same successful execution and plotting as with the built-in data set, but keep getting this error.
Result from doing str(mydataset):
> str(SRschneider)
Classes ‘tbl_df’, ‘tbl’ and 'data.frame': 150 obs. of 10 variables:
$ ...1 : num 1 2 3 4 5 6 7 8 9 10 ...
$ study_id: num 1 1 1 1 1 1 1 1 1 1 ...
$ author : chr "Arora" "Arora" "Arora" "Arora" ...
$ year : num 2006 2006 2006 2006 2006 ...
$ group : chr NA NA NA NA ...
$ cutpoint: chr "6" "7.0" "8.0" "9.0" ...
$ tpos : num 133 131 130 127 119 115 113 110 102 98 ...
$ fneg : num 5 7 8 11 19 23 25 28 36 40 ...
$ fpos : num 34 33 31 30 28 26 25 21 19 19 ...
$ tneg : num 0 1 3 4 6 8 9 13 15 15 ...
Just a quick follow-up on Ben's detailed answer.
The statistical method implemented in diagmeta() expects the argument cutpoint to be a continuous variable. We added a corresponding check for argument cutpoint (as well as for arguments TP, FP, TN, and FN) in version 0.3-1 of the R package diagmeta; see the commit in the GitHub repository for technical details.
Accordingly, the following R commands will result in a more informative error message:
data(Schneider2017)
diagmeta(tpos, fpos, tneg, fneg, as.character(cutpoint),
         studlab = paste(author, year, group), data = Schneider2017)
You said that you "have made sure every column's data type matches that in the Schneider2017 dataset", but that doesn't seem to be true. Besides differences between num (numeric) and int (integer) types (which typically aren't important), your data has
$ cutpoint: chr "6" "7.0" "8.0" "9.0" ...
while str(Schneider2017) has
$ cutpoint: num 6 7 8 9 10 11 12 13 14 15 ...
Having your cutpoint be a character rather than numeric means that R will try to treat it as a categorical variable (with many discrete levels). This is very likely the source of your problem.
The cutpoint variable is likely a character because R encountered some value in this column that can't be interpreted as numeric (something as simple as a typographic error). You can use SRschneider$cutpoint <- as.numeric(SRschneider$cutpoint) to convert the variable to numeric by brute force (values that can't be interpreted will be set to NA), but it would be better to go upstream and see where the problem is.
If you use tidyverse packages to load your data you should get a list of "parsing problems" that may be useful. You can also try cp <- SRschneider$cutpoint; cp[which(is.na(as.numeric(cp)))] to look at the values that can't be converted.
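Putting those two suggestions together as a small sketch (the example offending values in the comment are hypothetical):
cp  <- SRschneider$cutpoint
bad <- which(is.na(suppressWarnings(as.numeric(cp))) & !is.na(cp))
cp[bad]                                  # inspect entries that won't convert, e.g. "7,0" or "8 "
SRschneider$cutpoint <- as.numeric(cp)   # convert once the cause is understood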

Error in R code - c50 code called exit with value 1

So, I'm new to machine learning in R. I'm trying the Kaggle Home Depot product search relevance competition in R.
The structure of my training data set is -
'data.frame': 74067 obs. of 6 variables:
 $ id           : int 2 3 9 16 17 18 20 21 23 27 ...
 $ product_uid  : int 100001 100001 100002 100005 100005 100006 100006 100006 100007 100009 ...
 $ product_title: Factor w/ 53489 levels "# 62 Sweetheart 14 in. Low Angle Jack Plane",..: 44305 44305 5530 12404 12404 51748 51748 51748 30638 25364 ...
 $ search_term  : Factor w/ 11795 levels "$ hole saw",". exterior floor stain",..: 1952 6411 3752 8652 9528 3499 7146 7148 4417 7026 ...
 $ relevance    : Factor w/ 13 levels "1","1.25","1.33",..: 13 10 13 9 11 13 11 13 11 13 ...
 $ levsim1      : num 0.1818 0.1212 0.0886 0.1795 0.2308 ...
where levsim1 is the vector of Levenshtein similarity coefficients from comparing the search term and the product name. The target variable is relevance, and I have tried using the C50 package in R to model this training set. However, once I run this command:
relevance_model <- C5.0(train.combined[,-5],train.combined$relevance)
(the relevance vector is a factor with 13 levels), my computer works on the computation for about 15-20 minutes and then I get this message in R:
c50 code called exit with value 1
I know that this error is common if there are empty cells; however, no cells are empty in the data set.
I'm not sure if I'm using the wrong kind of data for this package. If someone could shed light on why I'm getting this error, or on what to read up on to model this data set, that would be great.
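Not an answer, but a hedged checklist of things commonly associated with this error that could be checked first (a sketch against the column names shown above):
colSums(is.na(train.combined))                          # any NA values in a column?
sum(train.combined$product_title == "", na.rm = TRUE)   # any empty-string entries posing as levels?
train.combined <- droplevels(train.combined)            # drop unused factor levels before modelling
If none of these flag anything, the two text columns with tens of thousands of factor levels (product_title and search_term) may also be part of the problem, since treating raw text as plain factors of that cardinality is very demanding for C5.0.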

Ranking entries in a column based on sums of entries in another column

Hi everyone. I am a beginner in R with a question I can't quite figure out. I've run multiple searches within Stack Overflow to address my question (links to results here, here, and here), but none have addressed my issue. On to the problem: I have subset the data frame DAV from a larger dataset.
> str(DAV)
'data.frame': 994 obs. of 9 variables:
$ MIL.ID : Factor w/ 18840 levels "","0000151472",..: 7041 9258 10513 5286 5759 5304 5312 5337 5337 5547 ...
$ Name : Factor w/ 18395 levels ""," Atticus Finch",..: 1226 6754 12103 17234 2317 14034 15747 4542 4542 14819 ...
$ Center : int 2370 2370 2370 2370 2370 2370 2370 2370 2370 2370 ...
$ Gift.Date : Factor w/ 339 levels "","01/01/2015",..: 6 6 6 7 10 13 13 13 13 13 ...
$ Gift.Amount: num 100 47.5 150 41 95 ...
$ Solic. : Factor w/ 31 levels "","aa","ac","an",..: 20 31 20 29 20 8 28 8 8 8 ...
$ Tender : Factor w/ 10 levels "","c","ca","cc",..: 3 2 3 5 2 9 3 9 9 9 ...
$ Account : Factor w/ 16 levels "","29101-0000",..: 4 4 4 11 2 11 2 11 2 11 ...
$ Restriction: Factor w/ 258 levels "","AAU","ACA",..: 216 59 216 1 137 1 137 1 38 1 ...
The two relevant columns for my issue are MIL.ID, which contains a unique ID for a donor, and Gift.Amount, which contains a dollar amount for a single gift the donor gave. A single MIL.ID is often associated with multiple Gift.Amount entries, meaning that donor has given on multiple different occasions for various amounts. Here is what I want to do:
I want to separate out the above mentioned columns from the rest of the data frame;
I want to sum(Gift.Amount) but only do so for each donor, i.e. I want to create a sum of all gifts for MIL.ID 1234 in the above data.frame; and
I want to rank all the MIL.IDs based on the sum Gift.Amount entries associated with their ID.
I apologize for how basic this is, and if it is redundant to a question already asked, but I couldn't find anything.
Edit to address a comment: I am struggling to get the formatting correct here, so I included screenshots of the table and of the desired output of print(ranking) (screenshots not reproduced).
This should do it:
df <- DAV[, c("MIL.ID", "Gift.Amount")]               # extract the two relevant columns
df <- aggregate(Gift.Amount ~ MIL.ID, df, sum)         # sum the amounts with the same ID
df <- df[order(df$Gift.Amount, decreasing = TRUE), ]   # sort in decreasing order
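If an explicit rank column is wanted (step 3 in the question), one more line, added here as a sketch rather than as part of the answer above, would do it:
df$Rank <- rank(-df$Gift.Amount, ties.method = "min")  # 1 = largest total; ties share a rank
head(df)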

How do I create a survival object in R?

The question I am posting here is closely linked to another question I posted two days ago about gompertz aging analysis.
I am trying to construct a survival object, see ?Surv, in R. This will hopefully be used to perform Gompertz analysis to produce an output of two values (see original question for further details).
I have survival data from an experiment in flies which examines rates of aging in various genotypes. The data are available to me in several layouts, so the choice of which to use is up to you, whichever suits the answer best.
One dataframe (wide.df) looks like this, where each genotype (Exp, of which there are ~640) has a row, and the days run in sequence horizontally from day 4 to day 98 with counts of new deaths every two days.
Exp Day4 Day6 Day8 Day10 Day12 Day14 ...
A 0 0 0 2 3 1 ...
I make the example using this:
wide.df2<-data.frame("A",0,0,0,2,3,1,3,4,5,3,4,7,8,2,10,1,2)
colnames(wide.df2)<-c("Exp","Day4","Day6","Day8","Day10","Day12","Day14","Day16","Day18","Day20","Day22","Day24","Day26","Day28","Day30","Day32","Day34","Day36")
Another version is like this, where each day has a row for each 'Exp' and the number of deaths on that day is recorded.
Exp Deaths Day
A 0 4
A 0 6
A 0 8
A 2 10
A 3 12
.. .. ..
To make this example:
df2<-data.frame(c("A","A","A","A","A","A","A","A","A","A","A","A","A","A","A","A","A"),c(0,0,0,2,3,1,3,4,5,3,4,7,8,2,10,1,2),c(4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36))
colnames(df2)<-c("Exp","Deaths","Day")
Each genotype has approximately 50 flies in it. What I need help with now is how to go from one of the above dataframes to a working survival object. What does this object look like? And how do I get from the above to the survival object smoothly?
After noting that the total of Deaths was 55 and that you said the number of flies was "around 50", I decided the likely assumption was that this is a completely observed process. So you need to replicate rows so that there is one row for each death, and assign an event marker of 1. The "long" format is clearly the preferred format. You can then create a Surv object from 'Day' and 'event':
?Surv
df3 <- df2[rep(rownames(df2), df2$Deaths), ]
str(df3)
#---------------------
'data.frame': 55 obs. of 3 variables:
$ Exp : Factor w/ 1 level "A": 1 1 1 1 1 1 1 1 1 1 ...
$ Deaths: num 2 2 3 3 3 1 3 3 3 4 ...
$ Day : num 10 10 12 12 12 14 16 16 16 18 ...
#----------------------
df3$event=1
str(with(df3, Surv(Day, event) ) )
#------------------
Surv [1:55, 1:2] 10 10 12 12 12 14 16 16 16 18 ...
- attr(*, "dimnames")=List of 2
..$ : NULL
..$ : chr [1:2] "time" "status"
- attr(*, "type")= chr "right"
Note: If this were being done in the coxph function, the expansion to individual lines of data might not have been needed, since that function allows specification of case weights. (I'm guessing that the other regression functions in the survival package would not have needed this either.) In the past, Terry Therneau has expressed puzzlement that people are creating Surv objects outside the formula interface of coxph. The intended use of this Surv object was not described in sufficient detail to know whether a weighted analysis without expansion would be possible.
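As a sketch of that weighted route (untested, and using survfit() rather than coxph() since the example has no covariates), the aggregated long data can be kept as-is and the death counts passed as case weights:
library(survival)
df2$event <- 1                                  # every recorded death is an observed event
fit <- survfit(Surv(Day, event) ~ Exp,
               data = subset(df2, Deaths > 0),  # rows with zero deaths carry no information
               weights = Deaths)
summary(fit)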

appending new data to specific elements in lists in r

Please correct me if my terminology is wrong, because on this question I'm not quite sure what I'm dealing with regarding elements, objects, and lists... I just know it's not a data frame.
Using the example from prepksel {adehabitatHS}, I am trying to modify my own data to fit into their package. Running this command on their example data creates an object called x, which is a list with 3 sections (elements?) to it.
The example data code:
library(adehabitatHS)
data(puechabonsp)
locs <- puechabonsp$relocs
map <- puechabonsp$map
pc <- mcp(locs[,"Name"])
hr <- hr.rast(pc, map)
cp <- count.points(locs[,"Name"], map)
x <- prepksel(map, hr, cp)
Looking at the structure of x, it is a list of 3 elements called tab, weight, and factor:
str(x)
List of 3
$ tab :'data.frame': 191 obs. of 4 variables:
..$ Elevation : num [1:191] 141 140 170 160 152 121 104 102 106 103 ...
..$ Aspect : num [1:191] 4 4 4 1 1 1 1 1 4 4 ...
..$ Slope : num [1:191] 20.9 18 17 24 23.9 ...
..$ Herbaceous: num [1:191] 0.2 0.2 0.2 0.2 0.2 0.2 0.2 0.2 0.2 0.2 ...
$ weight: num [1:191] 1 1 1 1 1 2 2 4 0 1 ...
$ factor: Factor w/ 4 levels "Brock","Calou",..: 1 1 1 1 1 1 1 1 1 1 ...
For my data, I will create multiple "x" lists and want to merge the data within each segment. So, I have created an "x" for years 2007, 2008, and 2009. Now, I want to append the "tab" element of 08 to 07, then 09 to 07/08, and do the same for the "weight" and "factor" elements of this list "x". How do you bind that data? I thought about using unlist on each segment of the list, appending, joining the yearly data for each segment, and then rejoining the three segments back into one list, but this was cumbersome and seemed rather inefficient.
I know this is not how it will work, but in my head this is what I should be doing:
newlist<-append(x07$tab, x08$tab, x09$tab)
newlist<-append(x07$weight, x08$weight, x09$weight)
newlist<-append(x07$factor, x08$factor, x09$factor)
maybe rbind? do.call("rbind", lapply(....uh...stuck
append works for vectors and lists, but it won't give the output you want for data frames; the elements in your list (and they are lists) are of different types. Something like this should work:
tocomb <- list(x07, x08, x09)
newlist <- list(
  tab    = do.call("rbind", lapply(tocomb, function(x) x$tab)),
  weight = c(lapply(tocomb, function(x) x$weight), recursive = TRUE),
  factor = c(lapply(tocomb, function(x) x$factor), recursive = TRUE)
)
You may need to be careful with factors if they have different levels - something like as.character on the factors before converting them back with as.factor.
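A sketch of that factor handling (untested): combine as character so the levels from every year survive, then convert back to a factor.
newlist$factor <- as.factor(c(lapply(tocomb, function(x) as.character(x$factor)),
                              recursive = TRUE))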
This isn't tested, so some assembly may be required. I'm not an R wizard, and this may not be the best answer.
