I'm trying to get some data from a large Baseball database in a nice format. It's a MySQL database so I use RMySQL to access it.
Problem is, the easiest way to retrieve the data I need is using sapply, as I need to vary an index:
myf <- function(ab){
  search <- paste('select pitch_type, des from pitches where ab_id=', ab)
  query <- dbSendQuery(con2, search)
  return(fetch(query, n = -1))
}
pitches <- sapply(players$ab_id, myf, simplify = "array")
But it is very hard to access this data, as it returns a list of lists:
> mode(pitches[,1])
[1] "list"
Since each list element contains two columns of varying length, is there an easy way to stack all of these into a single matrix or data frame? I have tried many things without success.
Thanks!
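For reference, a sketch of one way these per-at-bat results might be stacked (assuming every query returns the same two columns, pitch_type and des):
# keep the individual results as a list of data frames, then row-bind them
pitch_list <- lapply(players$ab_id, myf)
pitches_df <- do.call(rbind, pitch_list)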
I wrote a for loop to create multiple empty data frames, using a vector of names, but even though it seemed really easy at first I got an error message: Error in ID_names[i] <- data.frame() : replacement has length zero
To be more specific, I'll provide a reproducible example:
ID_names <- c("Athens","Rome","Barcelona","London","Paris","Madrid")
for(i in 1:length(ID_names)){
  ID_names[i] <- data.frame()
}
Do you have any idea why this is wrong? I would like to ask you not only to provide a solution, but also to explain why this for loop is wrong, so that I can avoid this kind of mistake in the future.
You are trying to store a data frame in one element of a character vector (ID_names[i]), which is not possible. You might want to create a list of empty data frames instead and assign names to it, which can be done using replicate:
ID_names <- c("Athens","Rome","Barcelona","London","Paris","Madrid")
list_data <- setNames(replicate(length(ID_names), data.frame(), simplify = FALSE), ID_names)
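Each element can then be read or replaced by name, e.g. (purely illustrative values):
list_data[["Athens"]]                          # an empty data frame
list_data[["Athens"]] <- data.frame(x = 1:3)   # later, fill it with real data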
However, such initialisation of empty data frames is rarely useful and often creates more confusion down the road. Depending on your actual use case there may be better ways to handle this.
This is a question about the general approach in R. I'm trying to find my way into the R language, but the data types and loop approaches (apply, sapply, etc.) are still a bit unclear to me.
What is my target:
Query data from an API with parameters taken from a config list with multiple parameter sets, and return the data as one aggregated data.frame.
First I want to define a list of multiple vectors (columns):
site        segment     id
google.com  Googleuser  123
bing.com    Binguser    456
How do I manage such a list of value groups (row by row)? data.frames are column-oriented; you can't write a data.frame row by row in an R script. So far the only way I have found to define this initial config table is a csv file, which is an approach I would like to avoid, but I can't find a more elegant way.
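For reference, one row-by-row way to write such a table in a script would be tribble() from the tibble package (a sketch; tibble is a separate package, not base R):
library(tibble)

config <- tribble(
  ~site,        ~segment,     ~id,
  "google.com", "Googleuser", 123,
  "bing.com",   "Binguser",   456
)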
Now I want to query my data, let's say with this function:
query.data <- function(site, segment, id){
  config <- define_request(site, segment, id)
  result <- query_api(config)
  return(result)
}
This will give me a data.frame as a result, which means every time I query data the same columns come back. So my result should be one big data.frame, not a list of similar data.frames.
Now sapply allows one varying parameter plus multiple static parameters. mapply works with several varying parameters, but it gives me my data in some crazy output that I can't handle or even fully understand.
In principle the list of data.frames is fine, and the data is correct, but it feels cumbersome to me.
What core concepts of R have I not understood yet? What would be the right approach?
If you have a lapply/sapply solution that returns a list of dataframes with identical columns, you can easily get a single large dataframe with do.call(). do.call() passes each item of a list as an argument to another function, allowing you to do things such as
big.df <- do.call(rbind, list.of.dfs)
which appends the component dataframes into a single large dataframe.
In general, do.call(rbind, something) is a good trick to keep in your back pocket when working with R, since often the most efficient way to do something is some kind of apply function that leaves you with a list of elements when you really want a single matrix/vector/dataframe/etc.
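Applied to the setup above, a rough sketch (assuming query.data and a config data.frame with site/segment/id columns exist as described; Map() is mapply() with SIMPLIFY = FALSE, so it always returns a list):
# one call of query.data per row of the config table; results stay a list of data.frames
result_list <- Map(query.data, config$site, config$segment, config$id)

# stack them into one big data.frame
big.df <- do.call(rbind, result_list)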
I am using R to prepare some data for a D3 visualization. The visualization was created using the following structure (this is a single row from a .csv file that is subsequently converted to JSON in javascript).
Joe.Schmoe, joe.schmoe#email.com, Sao Paulo, ["Community01", "Community02", "Community03"],
["workgroup01","workgroup02"]
This is a single row. The headers would be:
Person, Email, Location, Communities, Workgroups
You'll notice that the Communities and Workgroups columns contain lists. Furthermore, these lists vary in length depending on which Communities and Workgroups each individual is associated with. I recognize that this is probably not best practice with regard to data tidiness, but it is what this viz expects.
So ... in R (which I'm learning), I'm finding it impossible to recreate this structure because, when I try to populate the "communities" or "workgroups" variables, R seems to expect that each variable will be of equal length.
The code I have reads from a data.frame which is a list of the members of a particular community, and adds the name of that community to a column in a master data.frame of all employees. I'm indexing by email address because it is unique. So this particular loop looks at each individual email address in a data.frame called "commTD" and finds it in a master data.frame called "testr". If it finds it, it looks at the communities variable and either replaces an NA value with the name of the community (in this case "Technical Design") or, if the vector already exists, appends "Technical Design" to it:
for(i in commTD$email){
  if(i %in% testr$email){
    tmpList <- testr[which(testr$email == i), 'communities']
    if(is.na(tmpList)){
      tmpList <- list(c("Technical Design"))
    } else {
      tmpList <- append(tmpList[[1]][1], 'Technical Design')
    }
    testr[which(testr$email == i), 'communities'] <- list(tmpList)
  }
}
This works fine for the initial replacement, but if I append a new community to the list, and then try to pass it back into the testr data.frame, I get an error:
Error in `[<-.data.frame`(`*tmp*`, which(testr$email == i), "communities", :
  replacement has 2 rows, data has 1
You'll note that I'm trying to create a list of vectors, which is just one way I've tried to figure this out. I thought maybe I could force R to see the list as a single object, even though it contains multiple items -- or in this case a vector of multiple items.
Is it just impossible in R to have varied-length vectors or lists as a single variable in a data frame?
Data frames are by definition a list of vectors of equal length, so when you ask if this is possible as a class data.frame(), no, it's not.
You could either use, as suggested, another type of object like data.table, or think of your desired output as a list of unequal-length vectors to pass to your js.
That object would look something like:
dataList <- list(name = c("Joe.Schmoe", "Joe.Bloe"),
email = c("joe.schmoe#email.com", "joe.bloe#email.com"),
location = c("Sao Paulo", "London"),
Communities = list(c("Community01", "Community02", "Community03"),
c("Community02", "Community05", "Community03")
),
Workgroups = list(c("workgroup01","workgroup02"),
c("workgroup01","workgroup03"))
)
Then access each field like a dataframe, for output to your js:
dataList$name
dataList$Communities
etc...
As per Frank's suggestion, if you want to access each entry via the email address, like this:
data_list[["joe.schmoe#email.com"]]
...then build the list with the email addresses as names, like so:
data_list = list(`joe.schmoe#email.com`=list(name="Joe",
location="Sao Paulo",
Communities=....),
`joe.bloe#email.com`=list(name="Joe", ...))
Then you can avoid the non-R style of using for() loops, and start using the lapply() family of functions to work on all the entries in a vectorised manner (see ?lapply for details).
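For instance (just an illustration, assuming data_list is built with email addresses as names, as above):
# pull one field out of every entry
sapply(data_list, function(entry) entry$location)

# or collect the Communities of each person
lapply(data_list, function(entry) entry$Communities)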
Hope it helps.
I have multiple csv-files in one folder. I want to load each csv-file in this folder into a separate data frame. Next, I want to extract certain elements from each data frame into a matrix and calculate the mean of all these matrices.
setwd("D:\\data")
group_1<-list.files()
a<-length(group_1)
mferg_mean<-data.frame
for(i in 1:a)
{
assign(paste0("mferg_",i),read.csv(group_1[i],header=FALSE,sep=";",quote="",dec=",",col.names=1:90))
}
As there are 11 csv-files in the folder I now have the data frames
mferg_1
to
mferg_11
How can I address each data frame in this loop? As mentioned, I want to extract certain elements from each data frame into a matrix. I imagine it would look something like this:
assign(paste0("mferg_matrix_",i),mferg_i[1:5,1:10])
But this obviously does not work because R does not recognize mferg_i in the loop. How can I address this data frame?
This is probably not something you should be using assign for in the first place. Working with a bunch of separate data.frames in R is a mess, but working with a list of data.frames is much easier. Try reading your data with
group_1<-list.files()
mferg <- lapply(group_1, function(filename) {
  read.csv(filename, header = FALSE, sep = ";", quote = "", dec = ",", col.names = 1:90)
})
and you get each value with mferg[[1]], mferg[[2]], etc. Then you can create a list of extractions with
mferg_matrix <- lapply(mferg, function(x) x[1:5, 1:10])
This is the more R-like way to do things.
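If the final goal is the mean over all those extracted pieces, a possible follow-up sketch (assuming the extracted values are numeric and every piece has the same 5 x 10 shape):
# optionally name the list elements after the files they came from
names(mferg_matrix) <- group_1

# element-wise sum of all extracts, divided by how many there are
mferg_mean <- Reduce("+", mferg_matrix) / length(mferg_matrix)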
But technically you can use get to retrieve values like you use assign to create them. For example
assign(paste0("mferg_matrix_",i),get(paste0("mferg_",i))[1:5,1:10])
but again, this is probably not a smart strategy in the long run.
I am trying to go from a data frame to a list structure in R (and I know technically a data frame is a list). I have a data frame containing reference chemicals and their mechanisms at different targets. For example, estrogen is an estrogen receptor agonist. What I would like is to transform the data frame into a list, because I am tired of typing out something like:
refchem$chemical_id[refchem$target=="AR" & refchem$mechanism=="Agonist"]
every time I need to access the list of specific reference chemicals. I would much rather access the chemicals by:
refchem$AR$Agonist
I am looking for a general answer, even though I have given a simplified example, because not all targets have all mechanisms.
This is really easy to accomplish with a loop:
example <- data.frame(target = rep(c("t1", "t2", "t3"), each = 20),
                      mechan = rep(c("m1", "m2"), each = 10, 3),
                      chems = paste0("chem", 1:60))

oneoption <- list()
for(target in unique(example$target)){
  oneoption[[target]] <- list()
  for(mech in unique(example$mechan)){
    oneoption[[target]][[mech]] <- as.character(example$chems[example$target == target & example$mechan == mech])
  }
}
I am just wondering if there is a more clever way to do it. I tried playing around with lapply and did not make any progress.
Using split:
refchem_split <- split(refchem, list(refchem$target, refchem$mechanism))
should do the trick.
The new way to access would be refchem_split$AR.Agonist
If you make a keyed data.table instead, ...
you'll still have all the data in one data.frame (instead of a possibly-nested list of many);
you may find iterating over these subsets nicer; and
the syntax is pretty clean:
To access a subset:
DT[.('AR','Agonist')]
To do something for each group, that will be rbinded together in the result:
DT[,{do stuff},by=key(DT)]
Similar to aggregate(), any list of vectors of the correct length can go into the by, not just the key.
Finally, DT came from...
require(data.table)
DT <- data.table(refchem,key=c('target','mechanism'))
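As a minimal illustration of the "do stuff by group" pattern above (assuming DT was built as just shown; counting rows per group only stands in for the real computation):
# number of chemicals for each target/mechanism combination
DT[, .(n_chems = .N), by = key(DT)]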
You can also use a plyr function:
library(plyr)
dlply(example, .(target, mechan))
It has the added advantage of letting you pass a function to process each piece of data, if needed (the call above uses an implicit identity).
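For example, to get just the chemical names per group rather than whole data frames (a sketch based on the example data above):
dlply(example, .(target, mechan), function(d) as.character(d$chems))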