Appending new data to specific elements in lists in R

Please correct me if my terminology is wrong, because on this question I'm not quite sure what I'm dealing with regarding elements, objects, lists... I just know it's not a data frame.
Using the example from prepksel {adehabitatHS}, I am trying to modify my own data to fit into their package. Running the commands below on their example data creates an object called x, which is a list with three elements.
The example data code:
library(adehabitatHS)
data(puechabonsp)
locs <- puechabonsp$relocs
map <- puechabonsp$map
pc <- mcp(locs[,"Name"])
hr <- hr.rast(pc, map)
cp <- count.points(locs[,"Name"], map)
x <- prepksel(map, hr, cp)
Looking at the structure of x, it is a list of 3 elements called tab, weight, and factor:
str(x)
List of 3
$ tab :'data.frame': 191 obs. of 4 variables:
..$ Elevation : num [1:191] 141 140 170 160 152 121 104 102 106 103 ...
..$ Aspect : num [1:191] 4 4 4 1 1 1 1 1 4 4 ...
..$ Slope : num [1:191] 20.9 18 17 24 23.9 ...
..$ Herbaceous: num [1:191] 0.2 0.2 0.2 0.2 0.2 0.2 0.2 0.2 0.2 0.2 ...
$ weight: num [1:191] 1 1 1 1 1 2 2 4 0 1 ...
$ factor: Factor w/ 4 levels "Brock","Calou",..: 1 1 1 1 1 1 1 1 1 1 ...
For my data, I will create multiple "x" lists and want to merge the data within each element. I have created an "x" for each of the years 2007, 2008, and 2009. Now I want to append the "tab" element of 2008 to 2007, then 2009 to 2007/2008, and do the same for the "weight" and "factor" elements of the list "x". How do you bind that data? I thought about using unlist on each element of the list, appending the yearly data element by element, and then rejoining the three elements back into one list, but this was cumbersome and seemed rather inefficient.
I know this is not how it will work, but in my head this is what I should be doing:
newlist<-append(x07$tab, x08$tab, x09$tab)
newlist<-append(x07$weight, x08$weight, x09$weight)
newlist<-append(x07$factor, x08$factor, x09$factor)
maybe rbind? do.call("rbind", lapply(....uh...stuck

append works for vectors and lists, but it won't give the output you want for data frames, and the elements in your list (and they are lists) are of different types. Something like this should work:
tocomb <- list(x07, x08, x09)
newlist <- list(
  tab    = do.call("rbind", lapply(tocomb, function(x) x$tab)),
  weight = c(lapply(tocomb, function(x) x$weight), recursive = TRUE),
  factor = c(lapply(tocomb, function(x) x$factor), recursive = TRUE)
)
You may need to be careful with the factors if they have different levels: something like as.character on the factors before combining, then converting back with as.factor.
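For instance, a factor-safe variant of the same idea might look like this (a sketch, assuming the x07/x08/x09 lists above):
# round-trip the factor through character so differing levels are merged cleanly
newlist <- list(
  tab    = do.call("rbind", lapply(tocomb, function(x) x$tab)),
  weight = unlist(lapply(tocomb, function(x) x$weight)),
  factor = as.factor(unlist(lapply(tocomb, function(x) as.character(x$factor))))
)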
This isn't tested, so some assembly may be required. I'm not an R wizard, and this may not be the best answer.


"Number of observations <= number of random effects" error

I am using a package called diagmeta for meta-analysis purposes. I can use this package with a built-in data set called Schneider2017. However, when I make my own data set, I get the following error:
Error: number of observations (=300) <= number of random effects (=3074) for term (Group * Cutoff | Study); the random-effects parameters and the residual variance (or scale parameter) are probably unidentifiable
Another thread here on SO suggests the error is caused by the data format of one or more columns. I have made sure every column's data type matches that in the Schneider2017 dataset, with no effect.
Link to the other thread
I have tried extracting all of the data from the Schneider2017 dataset into Excel and then importing that data set back through RStudio. This again makes no difference, which suggests to me that something in the data format could be different, although I can't see how.
diag2 <- diagmeta(tpos, fpos, tneg, fneg, cutpoint,
                  studlab = paste(author, year, group),
                  data = SRschneider,
                  model = "DIDS", log.cutoff = FALSE,
                  check.nobs.vs.nRE = "ignore")
I expected the same successful execution and plotting as with the built-in data set, but keep getting this error.
Result from doing str(mydataset):
> str(SRschneider)
Classes ‘tbl_df’, ‘tbl’ and 'data.frame': 150 obs. of 10 variables:
$ ...1 : num 1 2 3 4 5 6 7 8 9 10 ...
$ study_id: num 1 1 1 1 1 1 1 1 1 1 ...
$ author : chr "Arora" "Arora" "Arora" "Arora" ...
$ year : num 2006 2006 2006 2006 2006 ...
$ group : chr NA NA NA NA ...
$ cutpoint: chr "6" "7.0" "8.0" "9.0" ...
$ tpos : num 133 131 130 127 119 115 113 110 102 98 ...
$ fneg : num 5 7 8 11 19 23 25 28 36 40 ...
$ fpos : num 34 33 31 30 28 26 25 21 19 19 ...
$ tneg : num 0 1 3 4 6 8 9 13 15 15 ...
Just a quick follow-up on Ben's detailed answer below.
The statistical method implemented in diagmeta() expects that argument cutpoint is a continuous variable. We added a corresponding check for argument cutpoint (as well as arguments TP, FP, TN, and FN) in version 0.3-1 of R package diagmeta; see commit in GitHub repository for technical details.
Accordingly, the following R commands will result in a more informative error message:
data(Schneider2017)
diagmeta(tpos, fpos, tneg, fneg, as.character(cutpoint),
         studlab = paste(author, year, group), data = Schneider2017)
You said that you "have made sure every column's data type matches that in the Schneider2017 dataset", but that doesn't seem to be true. Besides differences between num (numeric) and int (integer) types (which actually aren't typically important), your data has
$ cutpoint: chr "6" "7.0" "8.0" "9.0" ...
while str(Schneider2017) has
$ cutpoint: num 6 7 8 9 10 11 12 13 14 15 ...
Having your cutpoint be a character rather than numeric means that R will try to treat it as a categorical variable (with many discrete levels). This is very likely the source of your problem.
The cutpoint variable is likely a character because R encountered some value in this column that can't be interpreted as numeric (something as simple as a typographic error). You can use SRschneider$cutpoint <- as.numeric(SRschneider$cutpoint) to convert the variable to numeric by brute force (values that can't be interpreted will be set to NA), but it would be better to go upstream and see where the problem is.
If you use tidyverse packages to load your data you should get a list of "parsing problems" that may be useful. You can also try cp <- SRschneider$cutpoint; cp[which(is.na(as.numeric(cp)))] to look at the values that can't be converted.
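Expanded slightly, that last check might look like this (a sketch, assuming SRschneider is loaded as above):
# show the distinct cutpoint values that cannot be converted to numeric;
# suppressWarnings() hides the "NAs introduced by coercion" warning
cp <- SRschneider$cutpoint
bad <- cp[is.na(suppressWarnings(as.numeric(cp))) & !is.na(cp)]
unique(bad)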

"undefined columns selected" error in R when trying to subset using sapply

I have been tearing my hair out over this for the last hour. The following code was working perfectly a couple of hours ago, and now I have no idea why it doesn't anymore. I have searched for other questions regarding the "undefined columns selected" error, but I think I have corrected for all of the info in those answers. I am sure there is some tiny thing I have overlooked or accidentally left in, but I can't see it!
I have a data frame with both factor and numeric variables. I want to subset it so that I keep all of the factor variables and remove the numeric variables whose columns have a mean < 0.1.
I found the following code in another question on Stack Overflow, which, slightly modified, worked well on my test data (a smaller sub-dataset I am using for testing before trying out code on a big 3 GB object):
meanfunction01 <- function(x){
  if(is.numeric(x)){
    mean(x) > 0.1
  } else {
    TRUE
  }
}
#then apply function to data table
Zdata <- Data1[,sapply(Data1, meanfunction01)]
I swear I was using this a few hours ago; then, when I came back to it and tried to use it again, it stopped working and now just returns the following error:
Error in `[.data.frame`(Data1, , sapply(Data1, meanfunction01)) :
undefined columns selected
I was trying to modify the function so that it would loop over multiple objects (I have 54 objects I want to apply it to and didn't want to type them all manually), but I don't think I edited the original function, and now it has stopped working.
A brief str() of my data:
> str(Data1[1:10])
'data.frame': 11 obs. of 10 variables:
$ Name : Factor w/ 11688 levels "GTEX-1117F-0226-SM-5GZZ7",..: 8186 8242 8262 8270 8343 8388 8403 8621 8689 8709 ...
$ SEX : Factor w/ 2 levels "Female","Male": 1 2 2 1 1 2 2 1 2 1 ...
$ AGE : Factor w/ 6 levels "20-29","30-39",..: 4 4 1 3 3 1 3 3 3 2 ...
$ CIRCUMSTANCES: Factor w/ 5 levels "0","1","2","3",..: 1 1 1 1 1 1 1 1 1 1 ...
$ Tissue.x : Factor w/ 53 levels "Adipose_Subcutaneous",..: 7 7 7 7 7 7 7 7 7 7 ...
$ ENSG00000223972.4 : num 0 0.0701 0.0339 0.1149 0.0549 ...
$ ENSG00000227232.4 : num 12.5 17.2 13.1 16 15.7 ...
$ ENSG00000243485.2 : num 0.0717 0 0.1508 0 0.061 ...
$ ENSG00000237613.2 : num 0 0.0654 0 0.0402 0.0768 ...
$ ENSG00000268020.2 : num 0 0.0421 0.0611 0 0 ...
So if your only issue is changing the class of the integer variables in your data.frame, but you have many columns (>10000), you may want to consider converting your data.frame into a data.table. Your code would then look like this:
library(data.table)
Data1 <- data.table(Data1)
# or, if you have your data in a csv document, just use fread instead of
# read.csv, which will automatically give you a data.table
Then you just need to find the integer columns using this:
which(sapply(Data1,is.integer))
Putting it altogether using the data.table commands:
int_cols <- which(sapply(Data1, is.integer))
Data1[, (int_cols) := lapply(.SD, as.numeric), .SDcols = int_cols]
Note you don't need to assign the result of the above lines to anything: data.table modifies columns by reference (no copy is made), which makes it much faster than data.frame or tibble objects for this kind of operation. So running the above lines will update your Data1 object in place. The classes of the other, non-integer columns (i.e., the factors) will remain unchanged.
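For instance, a quick check on toy data that the conversion behaves as described (a hypothetical example):
library(data.table)
DT <- data.table(a = 1:3, b = c(1.5, 2.5, 3.5), f = factor(c("x", "y", "z")))
int_cols <- which(sapply(DT, is.integer))
DT[, (int_cols) := lapply(.SD, as.numeric), .SDcols = int_cols]
str(DT)  # 'a' is now numeric; 'b' and 'f' are unchanged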
Please update if you have further questions but this should answer your comment. Best of luck!

Looping histograms AND subsets in R and printing to pdf

This is the first question I have asked on Stack Overflow. However, I am a student and have been using this website for several years without needing to ask a question. There is a wealth of information on here, and I appreciate the people who take the time to answer questions. If I need to make any changes to the question or its format, I will be more than happy to.
I am researching habitat use by a wildlife species. I conducted field studies on GPS collared animals and took vegetative measurements in the field and landscape measurements in GIS.
Currently, I need to classify each plot (unique.id) into a "forest type" (e.g., Douglas-fir low-elevation forest, ponderosa pine woodland, etc.) based on the attributes of the plot. The "forest type" is arbitrary and created by me. I am not using R to classify for me, just to provide visual aids and summary statistics on each plot.
To aid in this, I would like to display a histogram of the tree diameter distributions by tree species for each plot. In the same image window, I would like to display a few other variables from that plot, such as canopy cover (canopy), stand age (age), the species of the tree that the age was taken from (agespecies), elevation (Elev), aspect (Aspect), and stem density (density). Due to the large number of plots, it would be nice to print them all to a PDF or another format for review outside of R.
I am not looking for R to classify the plots for me, just to provide some summary and visual information for each plot to assist me in classifying it.
So far, I have been using the "histogram" function in the "lattice" library, but I am open to using something different. I have been able to write code to build a diameter histogram and loop it for each plot. I have also been able to add a subset if I am just running one plot at a time, but I don't know how to loop the subset. I am also unsure how to add the extra variables (canopy, age, agespecies, Elev, Aspect, density) to the histogram.
Finally, most plots do not contain every possible tree species. Is there a way to order the histograms by which species has the highest number of counts and/or not show the histograms that are empty?
I have pasted my code so far and the structure of the data below. The data are in two separate files, "dbh" and "masterplot".
Data:
> str(dbh)
'data.frame': 80719 obs. of 7 variables:
$ unique.id: Factor w/ 1165 levels "CalvA1","CalvA10",..: 1 1 1 1 1 1 1 1 1 1 ...
$ species : Factor w/ 14 levels "abla","abpr",..: 1 2 3 3 4 4 5 5 5 7 ...
$ dbh : num 7.8 1.1 3.3 3.8 4.1 3.4 6.1 4.2 3.2 3.8 ...
str(masterplot)
'data.frame': 1170 obs. of 41 variables:
$ unique.id : Factor w/ 1165 levels "CalvA1","CalvA10",..: 1 2 3 4 5 6 7 8 9 10 ...
$ canopy : num 16 19 28 25 1 3 23 14 7 18 ...
$ age : num 147 72 167 64 153 144 192 154 173 44 ...
$ agespecies : Factor w/ 14 levels "abla","alru",..: 6 11 7 11 7 6 11 6 11 6 ...
$ Elev : num 1597 1850 1638 1540 1695 ...
$ Aspect : num 238.6 246.1 165.5 242.1 24.4 ...
$ density : num 8700 6600 6800 7800 14600 5600 13900 4600 3900 4000 ...
Code:
library(lattice)
lathist.fx <- function(x){
  windows()
  # subset the whole data frame so dbh and species stay the same length
  print(histogram(~ dbh | species, data = dbh[dbh$unique.id == x, ],
                  breaks = c(0, 4, 11, 50)))
}
for (i in levels(dbh$unique.id))  # each plot id once, not once per row
  lathist.fx(i)
I think the subsets will look something like this…
sub=masterplot$age[masterplot$unique.id=="LeftA36"]
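A minimal sketch of the looped subsets printed to a single PDF (untested; it assumes the dbh and masterplot objects above and puts the plot-level variables in each page title):
library(lattice)
pdf("plot_histograms.pdf")
for (id in levels(dbh$unique.id)) {
  sub  <- dbh[dbh$unique.id == id, ]
  info <- masterplot[masterplot$unique.id == id, ]
  p <- histogram(~ dbh | species, data = sub, breaks = c(0, 4, 11, 50),
                 main = paste0(id, ": canopy=", info$canopy, ", age=", info$age,
                               ", elev=", info$Elev, ", density=", info$density))
  print(p)  # each print() call becomes one page of the pdf
}
dev.off()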

Reading a csv file using ffdf and subsetting it successfully

I have been researching a way to efficiently extract information from large csv data sets using R. Many seem to recommend the package ff. I was successful in reading the data sets but am now running into problems trying to subset them.
The largest data set contains over 650,000 rows and 1005 columns. Not all columns contain the same data types. Viewed as a dataframe, the structure would look like this:
'data.frame': 5 obs. of 1005 variables:
$ SAMPLING_EVENT_ID : Factor w/ 5 levels "S6230404","S6252242",..: 2 1 3 4 5
$ LATITUDE : num 24.4 24.5 24.5 24.5 24.5
$ LONGITUDE : num -81.9 -81.9 -82 -82 -82
$ YEAR : int 2010 2010 2010 2010 2010
$ MONTH : int 4 3 10 10 10
$ DAY : int 97 88 299 298 300
$ TIME : num 9 10 10 11.58 9.58
$ COUNTRY : Factor w/ 1 level "United_States": 1 1 1 1 1
$ STATE_PROVINCE : Factor w/ 1 level "Florida": 1 1 1 1 1
$ COUNT_TYPE : Factor w/ 2 levels "P21","P22": 2 2 1 1 1
$ EFFORT_HRS : num 6 2 7 6 3.5
$ EFFORT_DISTANCE_KM : num 48.28 8.05 0 0 0
$ EFFORT_AREA_HA : int 0 0 0 0 0
$ OBSERVER_ID : Factor w/ 3 levels "obs132426","obs58643",..: 3 2 1 1 1
$ NUMBER_OBSERVERS : Factor w/ 2 levels "?","1": 2 1 2 2 2
$ Zenaida_macroura : int 0 0 1 0 0
All other variables being similar to this last one i.e. various species of bird.
Here is the code I used to "successfully" read the csv:
B2010 <- read.table.ffdf(x = NULL, file = "filePath&Name", nrows = -1,
                         first.rows = 50000, next.rows = 50000)
Trying to learn about the ffdf output, I entered command lines such as dim(B2010), str(B2010), ls(B2010), etc. dim(B2010) resulted in the appropriate number of rows but only one column (a single string per record containing the values separated by commas), and ls(B2010) outputted [1] "physical" "row.names" "virtual" instead of the usual list of variables.
I'm not sure how to handle this type of output to be able to extract, say, STATE_PROVINCE == "California". How do I tell B2010 what the variables are? I think I need to look at this differently, but I need some of your help to figure it out.
The ultimate goal for me is to subset a bunch of csv data sets (since I have one per year) and put the results back together as a data frame for various analyses.
Thanks,
Joe
To subset an ffdf, use the ffbase package.
As in:
require(ffbase)
x <- subset(B2010, B2010$STATE_PROVINCE == "California")
I finally found the solution to getting the ffdf variable names and types properly read and accessible for subsetting:
B2010 <- read.csv.ffdf(file = "filepath/name",
                       colClasses = c("factor", "numeric", "numeric", "integer",
                                      "integer", "integer", "numeric",
                                      rep("factor", 998)),
                       first.rows = 10000, next.rows = 50000, nrows = -1)
This took forever to read but seems to have worked, i.e., I was able to create a subset of the data. Next step: save the subset back to a "normal" data frame and/or to a csv.
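That next step might look something like this (a sketch, assuming the subset x from the answer above fits in memory):
sub2010 <- as.data.frame(x)  # pull the ffdf subset back into an ordinary data frame
write.csv(sub2010, "B2010_subset.csv", row.names = FALSE)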
According to the help page at ?read.table.ffdf, you should be using read.csv.ffdf(...). Then go to the page cited by Brandon.

Wrong R data type or bad data?

I'm having trouble running simple functions on a data frame and am unsure whether it's the data type of the columns or bad data in the data frame.
I exported a SQL query into a CSV file, then loaded it into a data frame and attached it:
df <- read.csv("~/Desktop/orders.csv")
attach(df)
When I am done and run str(df), here is what I get:
$ AccountID: Factor w/ 18093 levels "(819947 row(s) affected)",..: 10 97 167 207 207 299 299 309 352 573 ...
$ OrderID : int 1874197767 1874197860 1874196789 1874206918 1874209100 1874207018 1874209111 1874233050 1874196791 1875081598 ...
$ OrderDate : Factor w/ 280 levels "","2010-09-24",..: 2 2 2 2 2 2 2 2 2 2 ...
$ NumofProducts : int 16 6 4 6 10 4 2 4 6 40 ...
$ OrderTotal : num 20.3 13.8 12.5 13.8 16.4 ...
$ SpecialOrder : int 1 1 1 1 1 1 1 1 1 1 ...
Trying to run the following functions, here is what I get:
> length(OrderID)
[1] 0
> min(OrderTotal)
[1] NA
> min(OrderTotal, na.rm=TRUE)
[1] 5.00
> mean(NumofProducts)
[1] NA
> mean(NumofProducts, na.rm=TRUE)
[1] 3.462902
I have two questions related to this data frame:
Do I have the right data types for the columns? Nums versus integers versus decimals.
Is there a way to review the data set to find the rows that are driving the need to use na.rm=TRUE to make the function work? I'd like to know how many there are, etc.
The difference between num and int is pretty irrelevant at this stage.
See help(is.na) for starters on NA handling. Do things like:
sum(is.na(foo))
to see how many foo's are NA values. Then things like:
df[is.na(df$foo),]
to see the rows of df where foo is NA.
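Applied to the columns above, that might look like this (a small sketch using the asker's column names):
sum(is.na(df$NumofProducts))  # how many NumofProducts values are NA
df[is.na(df$OrderTotal), ]    # rows where OrderTotal is NA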
