Closed. This question needs to be more focused. It is not currently accepting answers.
Want to improve this question? Update the question so it focuses on one problem only by editing this post.
Closed 6 years ago.
Improve this question
I have a large number of data.frames (more than 50). How can I save them all to .csv quickly?
write.csv()
Fifty lines of code, it's awful...
Help me, guys!
If I understand correctly, the many data.frames are available in your R session...
First create a vector with the names of the data.frames, using ls or something similar. Then use get to retrieve the R object behind each name (the data.frames, in this case):
myfiles <- ls()  # note: this picks up every object in the session, not only data.frames
Then
for (d in myfiles) {
  current <- get(d)
  write.csv(current, file = paste0(d, ".csv"))
}
Closed 4 years ago.
I have more than 400 image files in my local directory. I want to read these images in R to pass them through the XGBoost algorithm. My two tries (codes) are given below:
library("EBImage")
img <- readImage("/home/vishnu/Documents/XG_boost_R/Data_folder/*.jpg")
and
library(jpeg)
library(biOps)
myjpg <- readJpeg("/home/vishnu/Documents/XG_boost_R/Data_folder/*.jpg")
It is a bit hard to guess what you want to do exactly, but one way to load a lot of files and process them is via a for-loop like this:
files <- list.files()  # create a vector with file names
for (i in seq_along(files)) {  # loop over file names
  load(files[i])  # load .rda file
  # do some processing and save results
}
This structure is generalizable to other cases. Depending on what kind of files you want to load, you will have to replace load(files[i]) with the appropriate command, for instance load.image() from the imager package.
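As a sketch for the image case, assuming the jpeg package is available (the directory path is the one from the question and is not verified), readJPEG() can be applied to each file in turn. Note that readJPEG reads one file at a time; a wildcard like *.jpg inside the path, as in the original attempts, is not expanded by these readers.

```r
library(jpeg)  # provides readJPEG(), which returns an image as a numeric array

# list all .jpg files in the directory (path taken from the question)
files <- list.files("/home/vishnu/Documents/XG_boost_R/Data_folder",
                    pattern = "\\.jpg$", full.names = TRUE)

images <- vector("list", length(files))  # pre-allocate a list of results
for (i in seq_along(files)) {
  images[[i]] <- readJPEG(files[i])  # each element is a height x width x channel array
}
```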
Closed 5 years ago.
I need to write a function in R, since in other languages like C++ it works very slowly. The function fills a 2D table with data and then sums the values of each row for further processing.
I am not sure if it answers your question, but if you work with data you can put it into a data frame to look at statistical parameters and for further processing. For example:
df <- data.frame(var1 = c(5, 10, 15), var2 = c(20, 40, 60))
# the 'summary' command gives you some statistical parameters per column
summary(df)
# with the 'apply' command you can address the rows;
# in this example you get the mean of each row:
apply(df, 1, mean)
Closed 5 years ago.
I have written the following code to compare two markets. The code works if we provide the data.frame names individually.
for (i in 1:nrow(Market_SystemA)) {
  A <- Market_SystemA[i, 2]
  B <- Market_SystemB[i, 3]
  MarketA <- data.frame(A)
  MarketB <- data.frame(B)
  # This is a function in R
  Compare_Function(MarketA, MarketB)
}
I'm not sure if I understand your question correctly, but it seems like you are calling your Compare_Function on two strings that refer to existing data frames. To actually get the data frames from the strings, you need to use the get function, which looks for an object whose name matches the string:
MarketA <- get(A)
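A minimal sketch of what get does. The name m1 and the tiny data.frame here are hypothetical, standing in for the real market data:

```r
# a data.frame that exists in the session under the name "m1"
m1 <- data.frame(price = c(100, 101, 99))

A <- "m1"          # a string that refers to the data.frame by name
MarketA <- get(A)  # look up the object whose name matches the string

identical(MarketA, m1)  # TRUE: MarketA now is the data.frame, not the string
```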
Closed 6 years ago.
I am quite new to R and it would be a big help to find a way to do this.
I have a list of values (just one column and around 16000 values) and I have to break this list up into smaller packets of 1000 values each, then save each packet as a csv file.
Is there a way of doing this in R?
Thank you in advance,
Dgupta
Something like this:
data <- as.data.frame(list)
groups <- split(1:nrow(data), ceiling(1:nrow(data) / 1000))
for (i in 1:length(groups)) {
  write.csv(data[groups[[i]], , drop = FALSE], file = paste0(i, ".csv"))
}
You can split the vector up by using a colon symbol in the vector index. For example:
> x <- c(10,20,30,40)
> x[1:2]
[1] 10 20
you can then write a vector to a csv by using the write.csv() function. One way to approach this is to write a loop that takes 1000 items at a time from the vector and writes them to a csv.
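Such a loop could be sketched like this, assuming the values are in a vector x (the vector contents and the output file names here are made up for illustration):

```r
x <- rnorm(16000)  # stand-in for the ~16000 values from the question
chunk_size <- 1000

starts <- seq(1, length(x), by = chunk_size)  # first index of each packet
for (i in seq_along(starts)) {
  end <- min(starts[i] + chunk_size - 1, length(x))  # last packet may be shorter
  write.csv(data.frame(values = x[starts[i]:end]),
            file = paste0("packet_", i, ".csv"),
            row.names = FALSE)
}
```

With 16000 values and packets of 1000 this writes 16 files, packet_1.csv through packet_16.csv.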
Closed 6 years ago.
I have a very big .csv file, around a few GB.
I want to read the first few thousand lines of it.
Is there any method to do this efficiently?
Use the nrows argument in read.csv(...)
df <- read.csv(file="my.large.file.csv",nrows=2000)
There is also a skip= parameter that tells read.csv(...) how many lines to skip before you start reading.
If your file is that large you might be better off using fread(...) in the data.table package. Same arguments.
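A quick sketch of fread with the same nrows argument. To keep it self-contained, a small stand-in file is written to a temporary path rather than using the real multi-GB file:

```r
library(data.table)

# stand-in for the large file: a small csv at a temporary path
path <- tempfile(fileext = ".csv")
write.csv(data.frame(a = 1:100, b = 101:200), path, row.names = FALSE)

dt <- fread(path, nrows = 20)  # read the header plus the first 20 data rows
```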
If you're on UNIX or OS/X, you can use the command line:
head -n 1000 myfile.csv > myfile.head.csv
Then just read it in R like normal.