Execute multiple sets of lines from another R file - r

I asked this before, but maybe I wasn't precise enough.
I want to run other, quite long R files from my master R file. At first glance that's easy to accomplish with source().
The point is, they are so long that I don't want to run all of them, just certain parts. Someone on my former post showed me this hidden gem, but both solutions run from point A to point B.
What I want is to run another file from my file, starting at line x, running to line x+z, then skipping a certain number of lines, and then continuing to run the same file from line y to y+z.
The solution in the link I attached works and is great, but I can't skip lines (this coding is above my skill level) without creating another function and setting more start and end points.
Is it possible to call something like source(df.R, excludeLine(1:6, 20, 30:end))?

Just slightly modifying this very excellent answer should work:
sourcePartial <- function(fn, startTag1='#from here1', endTag1='#to here1',
                          startTag2='#from here2', endTag2='#to here2') {
  lines <- scan(fn, what=character(), sep="\n", quiet=TRUE)
  # Locate the two pairs of start/end marker comments in the file.
  st1 <- grep(startTag1, lines)
  en1 <- grep(endTag1, lines)
  st2 <- grep(startTag2, lines)
  en2 <- grep(endTag2, lines)
  # Source only the lines strictly between each pair of markers.
  tc <- textConnection(lines[c((st1+1):(en1-1), (st2+1):(en2-1))])
  source(tc)
  close(tc)
}
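For example, assuming the target file (hypothetically df.R) has been tagged with matching marker comments, usage would look like this:
# In df.R (hypothetical), surround the two regions you want to run with:
#   #from here1 ... #to here1   and   #from here2 ... #to here2
sourcePartial("df.R")  # runs only the two tagged regions and skips everything between them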
But really, just have a go yourself next time and you might learn...

Related

Looping through nc files in R

Good morning everyone,
I am currently using the code written by Antonio Olinto Avila-da-Silva on this link: https://oceancolor.gsfc.nasa.gov/forum/oceancolor/topic_show.pl?tid=5954
It allows me to extract sst/chlor_a data from .nc files. It uses a loop to create an Excel file with all the data. Unfortunately, I noticed that the loop only ever reads the first data file, so I end up with the same data repeated 20 times in a row in my Excel file.
Does anyone have a solution to make this loop work properly?
I would first check that these two lines pick up all the files you are expecting:
(f <- list.files(".", pattern="*.L3m_MO_SST_sst_9km.nc",full.names=F))
(lf<-length(f))
And then there's a bug in the for-loop. This line:
data<-nc_open(f)
Needs to reference the iterator i, so change it to something like this:
data<-nc_open(f[[i]])
It appears both scripts have this same bug.
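Putting it together, the corrected loop skeleton would look roughly like this (a sketch assuming the ncdf4 package and the variable name used in the linked script):
library(ncdf4)
(f <- list.files(".", pattern="*.L3m_MO_SST_sst_9km.nc", full.names=FALSE))
(lf <- length(f))
for (i in 1:lf) {
  data <- nc_open(f[[i]])        # open the i-th file, not the whole vector
  sst <- ncvar_get(data, "sst")  # variable name assumed from the linked script
  # ... extract and accumulate results for this file here ...
  nc_close(data)
}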

Writing results from multiple runs of an R script to a single output csv file

I have written an R script that is to be used as part of a shell script based pipeline which will feed dozens of files containing genetic sequence data to the R script one after the other (using args[]).
I am having trouble finding a way to write the results of each run of this script to a single results file. I thought that the easiest way to do this might be to create an empty results.csv table and then have the script write to the next row of this file each time it is run (this avoids the script simply overwriting the file on each run). In this vein a friend helped me out with the following code:
x <- readLines("results.csv")
if (x[[1]] == "") { x[[1]] <- paste("meancoscore", "meanboot", "CIres", "RIres", "RC", "nodecount", sep = ",") }
x[[length(x) + 1]] <- paste(meancoscore, meanboot, CIres, RIres, RC, nodecount, sep = ",")
x <- data.frame(x)
write.table(x, "results.csv", row.names = F, col.names = F, sep = ",")
In the above code "meancoscore", "meanboot", "CIres", "RIres", "RC", and "nodecount" are first used as a header if the file has nothing on the first row.
Following this, the results (objects: meancoscore, meanboot, CIres, RIres, RC, and nodecount) are written in the columns corresponding to their headers. The idea here is that if you run the R script again with different source files, it should simply write the results to the next line in the results.csv file.
However, the following is seen in the results.csv file after three runs of this code with different input files:
"\""\\""meancoscore,meanboot,CIres,RIres,RC,nodecount\\""\""
""\""\\""0.000,76.3247863247863,0.721002252252252,0.983235214508053,0.708914804154032,117\\""\""
""\""0.845,77.6923076923077,0.723259762308998,0.983410513459875,0.711261254217159,117\""
""0.85,77.4358974358974,0.728886344116805,0.983878381369061,0.717135516451654,117"
Where my desired result would be the following:
meancoscore,meanboot,CIres,RIres,RC,nodecount
0.000,76.3247863247863,0.721002252252252,0.983235214508053,0.708914804154032,117
0.845,77.6923076923077,0.723259762308998,0.983410513459875,0.711261254217159,117
0.85,77.4358974358974,0.728886344116805,0.983878381369061,0.717135516451654,117
It is worth noting that each successive run seems to add more backslashes and more quotation marks to the results.csv file.
Ideally I would like to be able to simply read in the results.csv file when it is done and analyse the data by accessing the columns with results$meanboot, or summary(results$meanboot) for example.
Could anyone offer some advice on how to modify the above code or offer an alternative solution?
I should add here that I purposefully did not go for the option of writing into the R script a loop that will run through the input files of interest and simply assemble a full table of results as an object (I am aware that this would be very simple to write out). This was because the work being done by this script will be farmed out to multiple machines in a cluster.
Thank you for your time and any help you might be able to offer.
The problem was solved by adding quote = FALSE to the write.table() call as per voidHead's suspicion.
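For reference, a minimal sketch of the corrected final lines (unchanged apart from the added quote argument):
x <- data.frame(x)
write.table(x, "results.csv", row.names = FALSE, col.names = FALSE,
            sep = ",", quote = FALSE)  # quote = FALSE stops quotes/backslashes accumulating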

Read specific lines of a CSV file in R

I am trying to write a code which manipulates data from a particular .csv and writes the data to another one.
I want to read each line one by one and perform the operation.
Also I am trying to read a particular line from the .csv but what I am getting is that line and the lines before it.
I am a beginner in R-Language, so I find the syntax a bit confusing.
testconn<=file("<path>")
num <- (length(readLines(testconn)))
for (i in 1:num) {
  num1 = i - 1
  los<=read.table(file="<path>", sep=",", head=FALSE, skip=num1, nrows=1)[, c(col1, col2)]
  write.table(los, "<path>", row.names=FALSE, quote=FALSE, sep=",",
              col.names=FALSE, append=TRUE)
}
This is the code I am currently using; though it gives the desired output, it is extremely slow, as my .csv data file has 43200 lines.
Your code doesn't work: you are confusing the comparison operator <= with the assignment operator <-.
Your code is also extremely inefficient: you call both read.table and write.table 43200 times, reading/writing a single line each time.
You can simply do this:
los <- read.table(file="<path>", sep=",")[, c(col1, col2)]
res <- apply(los, 1, function(x) {
  ## treat your line here
})
write.table(res, "<path_write>", row.names=FALSE,
            quote=FALSE, sep=",", col.names=FALSE)
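As a concrete version of the same pattern (the file names, column indices, and per-row operation are all hypothetical placeholders):
# Hypothetical example: read once, keep columns 1 and 2, sum them per row, write once.
los <- read.table(file = "input.csv", sep = ",")[, c(1, 2)]
res <- apply(los, 1, function(x) x[1] + x[2])
write.table(res, "output.csv", row.names = FALSE,
            quote = FALSE, sep = ",", col.names = FALSE)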

R: Improving workflow and keeping track of output

I have what I think is a common enough issue, on optimising workflow in R. Specifically, how can I avoid the common issue of having a folder full of output (plots, RData files, csv, etc.), without, after some time, having a clue where they came from or how they were produced? In part, it surely involves trying to be intelligent about folder structure. I have been looking around, but I'm unsure of what the best strategy is.
So far, I have tackled it in a rather unsophisticated (overkill) way: I created a function MetaInfo (see below) that writes a text file with metadata, with a given file name. The idea is that if a plot is produced, this command is issued to produce a text file with exactly the same file name as the plot (except, of course, the extension), with information on the system, session, packages loaded, R version, function and file the metadata function was called from, etc. The questions are:
(i) How do people approach this general problem? Are there obvious ways to avoid the issue I mentioned?
(ii) If not, does anyone have any tips on improving this function? At the moment it's perhaps clunky and not ideal. Particularly, getting the file name from which the plot is produced doesn't necessarily work (the solution I use is one provided by @hadley in [1]). Any ideas would be welcome!
The function assumes git, so please ignore the probable warning produced. This is the main function, stored in a file metainfo.R:
MetaInfo <- function(message=NULL, filename)
{
  # message  - character string - Any message to be written into the information
  #            file (e.g., data used).
  # filename - character string - the name of the txt file (including relative
  #            path). Should be the same as the output file it describes (RData,
  #            csv, pdf).
  #
  if (is.null(filename))
  {
    stop('Provide an output filename - parameter filename.')
  }
  filename <- paste(filename, '.txt', sep='')
  # Try to get as close as possible to getting the file name from which the
  # function is called.
  source.file <- lapply(sys.frames(), function(x) x$ofile)
  source.file <- Filter(Negate(is.null), source.file)
  t.sf <- try(source.file <- basename(source.file[[length(source.file)]]),
              silent=TRUE)
  if (inherits(t.sf, 'try-error'))
  {
    source.file <- NULL
  }
  func <- deparse(sys.call(-1))
  # MetaInfo isn't always called from within another function, so func could
  # return as NULL or as general environment.
  if (any(grepl('eval', func, ignore.case=TRUE)))
  {
    func <- NULL
  }
  time <- strftime(Sys.time(), "%Y/%m/%d %H:%M:%S")
  git.h <- system('git log --pretty=format:"%h" -n 1', intern=TRUE)
  meta <- list(Message=message,
               Source=paste(source.file, ' on ', time, sep=''),
               Functions=func,
               System=Sys.info(),
               Session=sessionInfo(),
               Git.hash=git.h)
  sink(file=filename)
  print(meta)
  sink(file=NULL)
}
which can then be called in another function, stored in another file, e.g.:
source('metainfo.R')
RandomPlot <- function(x, y)
{
  fn <- 'random_plot'
  pdf(file=paste(fn, '.pdf', sep=''))
  plot(x, y)
  MetaInfo(message=NULL, filename=fn)
  dev.off()
}
x <- 1:10
y <- runif(10)
RandomPlot(x, y)
This way, a text file with the same file name as the plot is produced, with information that could hopefully help figure out how and where the plot was produced.
In terms of general R organization: I like to have a single script that recreates all work done for a project. Any project should be reproducible with a single click, including all plots or papers associated with that project.
So, to stay organized: keep a different directory for each project, each project has its own functions.R script to store non-package functions associated with that project, and each project has a master script that starts like
## myproject
source("functions.R")
source("read-data.R")
source("clean-data.R")
etc... all the way through. This should help keep everything organized, and if you get new data you just go to early scripts to fix up headers or whatever and rerun the entire project with a single click.
There is a package called ProjectTemplate that helps organize and automate the typical workflow with R scripts, data files, charts, etc. There are also a number of helpful documents, like Workflow of statistical data analysis by Oliver Kirchkamp.
If you use Emacs and ESS for your analyses, learning Org-Mode is a must. I use it to organize all my work. Here is how it integrates with R: R Source Code Blocks in Org Mode.
There is also a new free tool called Drake, which is advertised as "make for data".
I think my question belies a certain level of confusion. Having looked around, as well as explored the suggestions provided so far, I have reached the conclusion that it is probably not important to know where and how a file is produced. You should in fact be able to wipe out any output and reproduce it by rerunning the code. So while I might still use the above function for extra information, it really is a question of being ruthless and indeed cleaning up folders every now and then. These ideas are more eloquently explained here. This of course does not preclude the use of Make/Drake or ProjectTemplate, which I will try to pick up on. Thanks again for the suggestions @noah and @alex!
There is also now an R package called drake (Data Frames in R for Make), independent of Factual's Drake. The R package is also a Make-like build system that links code/dependencies with output.
install.packages("drake") # It is on CRAN.
library(drake)
load_basic_example()
plot_graph(my_plan)
make(my_plan)
Like its predecessor remake, it has the added bonus that you do not have to keep track of a cumbersome pile of files. Objects generated in R are cached during make() and can be reloaded easily.
readd(summ_regression1_small) # Read objects from the cache.
loadd(small, large) # Load objects into your R session.
print(small)
But you can still work with files as single-quoted targets. (See 'report.Rmd' and 'report.md' in my_plan from the basic example.)
There is a package developed by RStudio called pins that might address this problem.
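A minimal sketch of the idea, assuming the current pins API (the board type, object, and pin name here are placeholders):
library(pins)
board <- board_local()                      # pins also supports shared/remote boards
pin_write(board, my_results, "my_results")  # stores the object along with metadata
pin_read(board, "my_results")               # retrieve it later by name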

R - How to create array from console input

Hi all and thanks in advance for all your help.
In R, I'm sending a command to an external Windows program using system(command), which in turn outputs lines (with multiple values per line) that I see directly on the R console. They look something like this:
a,b,c,d,e,f,g,h
1,2,3,4,5,6,7,8
3,4,5,7,1,3,4,9
7,5,3,1,8,1,5,7
What I would like to do is create an array that has the top row as column names and each subsequent row from the input should be the values that go into these columns. Any and all help in making this work would be very appreciated.
This is my first foray into this territory so I'm quite stuck as to how to do it. I've meddled with scan(), pipe() and readLines() but haven't been able to succeed. I have no particular attachment to system(command), any function that will run the executable that will give me the output I need is fine by me if it helps achieve what I want.
The comment made by user1935457 did the trick.
read.table(text = system(command, intern=TRUE), sep = ",", header=TRUE)
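The intern=TRUE argument makes system() return the program's standard output as a character vector instead of printing it, and read.table(text=...) then parses that vector directly. A self-contained sketch of the same parsing step, using the sample lines above in place of real command output:
# Stand-in for system(command, intern=TRUE), using the sample output shown above.
out <- c("a,b,c,d,e,f,g,h",
         "1,2,3,4,5,6,7,8",
         "3,4,5,7,1,3,4,9",
         "7,5,3,1,8,1,5,7")
df <- read.table(text = out, sep = ",", header = TRUE)
str(df)  # a data frame with columns a..h; wrap in as.matrix(df) if a matrix is needed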
