I searched for a solution to my question for a while but did not find one that I could get working. Basically I have the following situation:
I read a file into a data frame called df1 that has many ids (each id can appear in the file 80-120 times), along with date and numerical data.
I have a script that does a bunch of calculations and then exports a csv file whose name is the classification I have created, an underscore, and the id, like below. Each file only contains one unique id but usually has 80+ rows.
write.table(df,
file = paste(unique(df$classification), "_", unique(df$id), ".csv"),
sep = ",", row.names = FALSE)
What I am hoping to do is, after I read in the file, get a unique list (I assume this would be a list?) of the id values, and then feed this into the rest of the script one value at a time. So essentially, I would take the first unique id in df1, feed it into the subset function, do a bunch of calculations, and then export the file. Move on to the second unique id, feed it into the subset, do a bunch of calculations, export the file. Rinse and repeat. This seems trivial but I have struggled to find a solution. Any help would be greatly appreciated!
I assume I can put a loop together prior to the line below and then have it loop through the entire script replacing the xxxxxxxxx with a new id each time?
df <- subset(df1, id == xxxxxxxxxxxxxxx)
If I understand your question correctly, you should be able to loop through like this:
for(i in unique(df1$id)){
df <- df1[df1$id == i,]
...
}
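For example, filling in the body with the subset and write.table lines from your question (just a sketch; the middle part is whatever calculations your script already does, and note that paste0 avoids the spaces that paste with its default separator would put in the file name):
for (i in unique(df1$id)) {
  # subset to the current id
  df <- subset(df1, id == i)
  # ... your existing calculations on df ...
  # export one file per id, named classification_id.csv
  write.table(df,
              file = paste0(unique(df$classification), "_", unique(df$id), ".csv"),
              sep = ",", row.names = FALSE)
}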
Beginner here: I have a list (see screenshot) called Coins_list from which I want to export the second data frame stored in it, called data, into a csv. When I use the code
write.csv(Coins_list$data, file = "Coins_list_full_data.csv")
I get a huge CSV with a bunch of numbers from the column named price, which apparently contains more data frames, if I am reading the output correctly (or at least that is what is displayed in the price column). How can I export this data frame into a CSV correctly? See screenshot for more details.
EDIT: I was able to get the first four rows into CSV by using df2 <- Coins_list$data; write.csv(df2[1:4,], file="BTC_row.csv"). However, it now looks like R puts the price of all four rows within a list c( ) and repeats it in each row. Any idea how to change that?
(I would post this as a comment, but I have too little reputation.)
Hey, for starters you could try flattening the JSON by going further than response list$content and looking at what is inside the content with another $.
Otherwise, you could try getting data$price and see what pops up from there.
something like this:
symbols <- data$symbol
df <- data.frame(price = numeric(0), symbol = character(0))
for (i in seq_along(symbols)) {
  # build one row per symbol and append it (assuming price and symbol line up)
  x <- data.frame(price = data$price[[i]], symbol = symbols[i])
  df <- rbind(df, x)
}
to get a dataframe with price and symbol. I don't know how the data is nested so I'm just guessing.
It would be helpful to know from where you got the data for reproducibility.
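On the EDIT: if price is a list-column, write.csv will dump the whole c(...) into each row. A minimal sketch for collapsing list-columns to plain text before exporting (assuming Coins_list$data is a data frame and you are happy with comma-separated strings in those columns):
df2 <- Coins_list$data
# find list-columns and collapse each entry into a comma-separated string
is_list_col <- sapply(df2, is.list)
df2[is_list_col] <- lapply(df2[is_list_col], function(col) sapply(col, toString))
write.csv(df2, file = "Coins_list_full_data.csv", row.names = FALSE)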
Does anyone know the best way to carry out a "for loop" that would read in different subject id's and append them to the name of an exported csv?
As an example, I have multiple output files from an electrocardiogram software program (each file belongs to one individual). The files are named C800_HR.bdf.evt, C801_HR.bdf.evt, C802_HR.bdf.evt etc. Each file gets read into r and then has a script applied to calculate heart rate variability. At the end of the script, I need to add a loop that will extract the subject id (e.g., C800, C801, C802) and write a new file name for each individual so that it becomes C800_RtoR.csv. Essentially, I would like to avoid changing the syntax every time I read in and export a file name.
I am currently using the following syntax to read in multiple files:
setwd("/Users/kmpc/Downloads")
myhrvdata <- lapply(Sys.glob("C8**_HR.bdf.evt"), read.delim)
Try this out:
cardio_files <- list.files(pattern = "C8\\d{2}_HR\\.bdf\\.evt")
subject_ids <- sub("^(C8\\d{2})_.*", "\\1", cardio_files)
myList <- lapply(cardio_files, read.delim)
names(myList) <- subject_ids
## do calculations on the list
for (i in names(myList)) {
  write.csv(myList[[i]], paste0(i, "_RtoR.csv"))
}
The only thing is, you have to deal with using a list when doing your calculations. You could combine them into a single data.frame, but it would be best to leave it as a list so you can write the files out at the end.
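If you do end up wanting a single data frame for the calculation step, a minimal sketch (assuming all the files share the same columns) is:
# tag each subject's rows so they can be split apart again later
for (i in names(myList)) myList[[i]]$subject_id <- i
combined <- do.call(rbind, myList)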
Consider generalizing your process by creating a function that: 1) reads in file, 2) processes data, 3) outputs to csv. Then have lapply call the defined method iteratively across all Sys.glob items and even return a list of calculated data frames.
proc_heart_rate <- function(f_name) {
# READ IN .evt FILE INTO df
df <- read.delim(f_name)
# CALCULATE HEART RATE VARIABILITY WITH df
...
# OUTPUT df TO CSV
subject_id <- gsub("\\_.*", "", f_name)
write.csv(df, paste0(subject_id, "_RtoR.csv"))
# RETURN df FOR OTHER USES
return(df)
}
# LIST OF DATA FRAMES WITH CALCULATIONS
myhrvdata_list <- lapply(Sys.glob("C8**_HR.bdf.evt"), proc_heart_rate)
I have read multiple questionnaire files into DFs in R. Now I want to create new DFs based on them, but with only specific rows in them, by looping over all of them. The loop appears to work fine. However, the selection of the rows does not seem to work. When I try selecting with simple square brackets, I get the error "incorrect number of dimensions". I tried it with subset(), but I don't seem to be able to set the subset correctly.
Here is what I have so far:
for (i in 1:length(subjectlist)) {
p[i] <- paste("path",subjectlist[i],sep="")
files <- list.files(path=p,full.names = T,include.dirs = T)
assign(paste("subject_",i,sep=""),read.csv(paste("path",subjectlist[i],".csv",sep=""),header=T,stringsAsFactors = T,row.names=NULL))
assign(paste("subject_",i,"_t",sep=""),sapply(paste("subject_",i,sep=""),[c((3:22),(44:63),(93:112),(140:159),(180:199),(227:246)),]))
}
Here's some code that tries to abstract away the details and do what it seems like you're trying to do. If you just want to read in a bunch of files and then select certain rows, I think you can avoid the assign functions and just use sapply to read all the data frames into a list. Let me know if this helps:
# Get the names of files we want to read in
files = list.files([arguments])
df.list = sapply(files, function(file) {
# Read in a csv file from the files vector
df = read.csv(file, header=TRUE, stringsAsFactors=FALSE)
# Add a column telling us the name of the csv file that the data came from
df$SourceFile = file
# Select only the rows we want
df = df[c(3:22,44:63,93:112,140:159,180:199,227:246), ]
}, simplify=FALSE)
If you now want to combine all the data frames into a single data frame, you can do the following (the SourceFile column tells you which file each row originally came from):
# Combine all the files into a single data frame
allDFs = do.call(rbind, df.list)
I am new to R and I am practicing writing R functions. I have 100 separate csv data files stored in my directory, and each is labeled by its id, e.g. "1" to "100".
I would like to write a function that reads some selected files into R, calculates the number of complete cases in each data file, and arranges the results into a data frame.
Below is the function that I wrote. First I list all the files into "dat". Then, using the rbind function, I read the selected files I want into a data frame. Lastly, I compute the number of complete cases using sum(complete.cases()). This seems straightforward, but the function does not work. I suspect there is something wrong with the index but have not figured out why. I searched through various topics but could not find a useful answer. Many thanks!
complete = function(directory,id) {
dat = list.files(directory, full.name=T)
dat.em = data.frame()
for (i in id) {
dat.ful= rbind(dat.em, read.csv(dat[i]))
obs = numeric()
obs[i] = sum(complete.cases(dat.ful[dat.ful$ID == i,]))
}
data.frame(ID = id, count = obs)
}
complete("envi",c(1,3,5)) `
I get an error and a warning message:
Error in data.frame(ID = id, count = obs) : arguments imply differing number of rows: 3, 5
One problem with your code is that you reset obs to numeric() each time you go through the loop, so obs ends up holding only the value from the last iteration (the number of complete cases in the last file read), with NAs in any earlier positions. Its length therefore no longer matches the length of id, which is what produces the "differing number of rows" error.
Another issue is that the line dat.ful = rbind(dat.em, read.csv(dat[i])) resets dat.ful to contain just the data frame being read in that iteration of the loop. This won't cause an error, but you don't actually need to store the previous data frames, since you're just checking the number of complete cases for each data frame you read in.
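For what it's worth, a minimal fix that keeps your loop structure (a sketch, assuming each file has an ID column as in your code and that dat[i] really is the file for id i) would preallocate obs once and skip the rbind:
complete = function(directory, id) {
  dat = list.files(directory, full.names = TRUE)
  obs = numeric(length(id))              # one slot per requested id
  for (j in seq_along(id)) {
    dat.ful = read.csv(dat[id[j]])       # relies on dat being in id order (see below)
    obs[j] = sum(complete.cases(dat.ful[dat.ful$ID == id[j], ]))
  }
  data.frame(ID = id, count = obs)
}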
Here's a different approach using lapply instead of a loop. Note that instead of giving the function a vector of indices, this function takes a vector of file names. In your example, you use the index instead of the file name as the file "id". It's better to use the file names directly, because even if the file names are numbers, using the index will give an incorrect result if, for some reason, your vector of file names is not sorted in ascending numeric order, or if the file names don't use consecutive numbers.
# Read files and return data frame with the number of complete cases in each csv file
complete = function(directory, files) {
# Read each csv file in turn and store its name and number of complete cases
# in a list
obs.list = lapply(files, function(x) {
dat = read.csv(paste0(directory,"/", x))
data.frame(fileName=x, count=sum(complete.cases(dat)))
})
# Return a data frame with the number of complete cases for each file
return(do.call(rbind, obs.list))
}
Then, to run the function, you need to give it a directory and a list of file names. For example, to read all csv files in the current working directory, you can do this:
filesToRead = list.files(pattern=".csv")
complete(getwd(), filesToRead)
I am trying to create a function that looks up a bunch of CSV files in a directory and then, taking the file ID as an argument, outputs a table (actually a data frame; I am new to the R language) with 2 columns: one titled ID, for the corresponding id parameter, and a second column with the count of rows in that file.
The files are all titled 001.csv - 322.csv
e.g. the output would have a first column titled ID whose first record is 001 (derived from 001.csv), and a second column titled "count of rows" whose first record is the number of rows in 001.csv.
The function looks like so: myfunction(directory,id)
Directory is the folder where the csv files are, and id can be a number (e.g. simply 1 or 9 or 100) or a vector (e.g. 200:300).
In the latter case, 200:300, the output would be a table with 101 rows (one per ID), where the first row would be ID 200 with, say, a count of 10 rows of data in that file.
So far:
complete <- function(directory,id = 1:332) {
# create an object to help read the appropriate csv files later in the function
csvfilespath <- sprintf("/Users/gcameron/Desktop/%s/%03d.csv", directory, id)
colID <- sprintf('%03d', id)
# now, how do I tell R to create a table with 2 columns titled ID and countrows?
# Now, how would I take each instance of an ID and add to this table the id and count of rows in each?
}
I apologize if this seems really basic. The tutorial I'm on moves fast and I have watched each video lecture and done a fair amount of research too.
SO is by far my favourite resource and I learn better by using it. Perhaps because it's personalised and directly applicable to my immediate tasks. I hope my questions also benefit others who are learning R.
BASED ON FEEDBACK BELOW
I now have the following script:
complete <- function(directory,id = 1:332) {
csvfiles <- sprintf("/Users/gcameron/Desktop/%s/%03d.csv", directory, id)
nrows <- sapply( csvfiles, function(f) nrow(read.csv(f)))
data.frame(ID=id, countrows=sapply(csvfiles,function(x) length(count.fields(x)))
}
Does this look like I'm on the right track?
I'm receiving an error "Error: unexpected '}' in:
"data.frame(ID=id, countrows=sapply(csvfiles,function(x) length(count.fields(x)))
}"
I cannot see where the extra "}" is coming from.
The "unexpected '}'" error comes from a missing closing parenthesis: the data.frame(...) call in your function is never closed, so R is still inside that call when it reaches the }. With the parentheses balanced (and using the csvfiles object you defined), the line is:
data.frame(ID = id, countrows = sapply(csvfiles, function(x) length(count.fields(x))))
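Putting that line back into the function from your edit (a sketch; the hard-coded Desktop path and the id = 1:332 default are copied from your code, and note that count.fields counts every line in the file, header included):
complete <- function(directory, id = 1:332) {
  # full path to each three-digit csv file, e.g. .../001.csv
  csvfiles <- sprintf("/Users/gcameron/Desktop/%s/%03d.csv", directory, id)
  # one row per id: the zero-padded ID and the number of lines in its file
  data.frame(ID = sprintf('%03d', id),
             countrows = sapply(csvfiles, function(x) length(count.fields(x))))
}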