How to use a sequential combination of YAML parameters in Rmarkdown - r

How can I use a sequence of parameters assigned in the YAML header of an R Markdown document? I am trying to write a csv file, but I can only use one param.
For instance, I have:
title: "My Title"
params:
  month: "March"
  person: "CEO"
  extention: ".csv"
I want to add all of the assigned parameters as a single continuous word:
write.table(d, file=params$month params$person params$extention, quote = F, fileEncoding = "UTF-8", sep = ";", row.names=FALSE)
However, that does not work: written like that, only one parameter is read.
What is the right way to do that?

The file argument has to be a single string, so you can use paste to put it all together as one string. (paste0 is the same as paste but puts no separator between the arguments by default.)
write.table(d, file = paste0(params$month, params$person, params$extention),
            quote = F, fileEncoding = "UTF-8", sep = ";", row.names = FALSE)
will give "MarchCEO.csv".
In your case only month made it into the file name; because of the spaces, the other parameters were not taken into account, and R treated them as something other than part of the file argument.
If you want the line to be more readable, you can define the name first, like this:
myfilename <- paste0(params$month, params$person, params$extention)
write.table(d, file = myfilename, quote = F, fileEncoding = "UTF-8",
            sep = ";", row.names = FALSE)
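If you later want separators between the pieces, sprintf makes the filename template explicit. A minimal sketch; here params is mocked as a plain list (in a knitted R Markdown document, params is provided automatically from the YAML header):

```r
# Mock of the YAML params list, for illustration only.
params <- list(month = "March", person = "CEO", extention = ".csv")

# paste0() concatenates with no separator:
myfilename <- paste0(params$month, params$person, params$extention)

# sprintf() spells out the template, e.g. with an underscore between pieces:
myfilename2 <- sprintf("%s_%s%s", params$month, params$person, params$extention)
```

This gives "MarchCEO.csv" and "March_CEO.csv" respectively.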

Related

How can I write a txt file with multiple delimiters [R]?

I'm trying to write a txt file that has multiple delimiters:
text|||value_1|value_2||||
Currently, I've tried:
write.table(variable, "file.txt", sep = "|", quote = FALSE, row.names = FALSE)
But I'm not sure how to get multiple delimiters, given that sep only takes a single one. I will appreciate any help, thank you!
Maybe you can write using one delimiter, read the file back, and replace that delimiter with whatever you want before writing again:
write.table(mtcars, "tmp.txt", sep = "|", quote = FALSE)
d = readLines("tmp.txt")
d = gsub("|", "|||", d, fixed = TRUE)
writeLines(d, "tmp2.txt")
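Alternatively, you can skip the intermediate file and build each output line yourself with paste0, then write with writeLines. A sketch assuming a small made-up data frame; the column names and delimiter pattern here are illustrative, not from the original question:

```r
# Hypothetical example data: each row becomes one line of output.
df <- data.frame(text = c("a", "b"), value_1 = 1:2, value_2 = 3:4)

# Place the delimiters exactly where you want them in each line:
lines <- paste0(df$text, "|||", df$value_1, "|", df$value_2, "||||")

writeLines(lines, "tmp3.txt")
```

This produces lines like a|||1|3|||| directly, with no second pass over the file.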

R: importing data from a CSV where the separator is "\"", but I get the error "more columns than column names"

I'm having a problem separating a csv file that R interprets as multiple row inputs:
[1] "bizname,addr,bizphone,numrevs" "Jersey Smoke,\""
[3] " 84 N Main St, Milltown, NJ 08850" " \",\""
[5] " (732) 253-7977" " \",\""
The first string is the headers within the file. Now the problem is that there are additional quotation marks within the data, and if I try to split the data on the quotation marks using something like this:
vapeshopsnj1 <- read.csv("~/Desktop/newjerseyvapeshopsA.csv",
                         row.names = NULL, sep = "\"", header = TRUE,
                         colClasses = "character", encoding = "utf-8")
this error is produced:
Error in read.table(file = file, header = header, sep = sep, quote = quote, : more columns than column names
After two hours of googling and reading various Stack Exchange posts and R resources, I have found the answer. If this saves you two hours one day, you can thank this post:
vapeshopsnj1 <- read.csv("~/Desktop/newjerseyvapeshopsA.csv", row.names = NULL,
                         quote = "\"", header = TRUE, colClasses = "character",
                         encoding = "utf-8")
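The difference between sep = "\"" and quote = "\"" can be reproduced without the original file by feeding a string to read.csv via its text argument. A minimal sketch with made-up data in the same shape as the question's:

```r
# Embedded quotes mark a field containing commas; quote = "\"" tells
# read.csv to treat everything between them as one value.
csv_text <- 'bizname,addr,bizphone,numrevs
Jersey Smoke," 84 N Main St, Milltown, NJ 08850"," (732) 253-7977",12'

d <- read.csv(text = csv_text, quote = "\"", header = TRUE,
              colClasses = "character")
```

With quote set correctly, the quoted address parses as a single field and the row count comes out right; using sep = "\"" instead splits rows at the quote characters, which is what produced the "more columns than column names" error.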

Escaping backslashes when using write.table

I have some strings in one of the columns of my data frame that look like:
bem\\2015\black.rec
When I export the data frame into a text file using the next line:
write.table(data, file = "sample.txt", quote = FALSE, row.names = FALSE, sep = '\t')
Then in the text file the text looks like:
bem\2015BELblack.rec
Do you know of an easy way to preserve the backslashes when writing the table to a text file?
The way I resolved this was by converting the backslashes into forward slashes:
library(readr)
library(stringr)

dataset <- read_delim(dataset.path, delim = '\t', col_names = TRUE, escape_backslash = FALSE)
dataset$columnN <- str_replace_all(dataset$Levfile, "\\\\", "//")
dataset$columnN <- str_replace_all(dataset$columnN, "//", "/")
write.table(dataset, file = "sample.txt", quote = FALSE, row.names = FALSE, sep = '\t')
This exports the text imported as bem\\2015\black.rec with the required slashes: bem//2015/black.rec
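The same conversion can be done in base R with a single gsub pass, without readr or stringr. A sketch on a made-up string; fixed = TRUE treats the pattern as a literal character rather than a regex, which sidesteps the backslash-escaping headaches:

```r
# In R source, "\\\\" and "\\" each denote one literal backslash, so x
# holds the literal text: bem\\2015\black.rec
x <- "bem\\\\2015\\black.rec"

# Replace every backslash with a forward slash; runs of two backslashes
# naturally become "//".
y <- gsub("\\", "/", x, fixed = TRUE)
```

This turns bem\\2015\black.rec into bem//2015/black.rec in one step.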

Edit specific cells of a CSV file using R, without changing anything else

I want to create a function that changes specific cells of a given CSV file (or files) and saves the result back in exactly the same format.
I have tried reading the CSV file and writing it back with the edited values, but the only way to do that without misinterpreting commas is to quote everything.
Is there a way to edit the files without putting quotes around every single value?
My guess is that there is some other way to read and write the CSV files than read.table.
P.S. The csv files I want to edit are not clean comma-separated values but have a rather vague format (there may be unquoted strings, quoted strings (mostly because of a comma in the string), and integers in the same column/row).
Here is the code I use:
editcell = function(id,
                    newvalue,
                    row,
                    col,
                    dbpath = "C:/data/",
                    add = FALSE)
{
  # READ
  temptable = read.table(file = paste0(dbpath, id, ".csv"),
                         header = F,
                         sep = ",",
                         dec = ".",
                         fill = TRUE,
                         quote = "",
                         comment.char = "",
                         colClasses = "character",
                         # nrows = 400,
                         stringsAsFactors = F)
  cat("There are", nrow(temptable), "rows and",
      ncol(temptable), "columns.")
  # EDIT
  if (add == TRUE) {
    temptable[row, col] = paste0(temptable[row, col], newvalue)
  } else {
    temptable[row, col] = newvalue
  }
  # WRITE
  write.table(temptable,
              file = paste0(dbpath, id, ".csv"),
              append = F,
              quote = T,
              sep = ",",
              eol = "\n",
              na = "NA",
              dec = ".",
              row.names = F,
              col.names = F,
              qmethod = "double",
              fileEncoding = "")
}
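If the goal is to change one cell while leaving every other byte of the file untouched, including its inconsistent quoting, one option is to avoid the parse/re-write round trip entirely and edit the raw lines. A sketch with hypothetical file contents; note the plain split on "," is only safe when the target row itself contains no quoted commas:

```r
# Hypothetical file: row 1 has a quoted field with a comma, row 2 does not.
writeLines(c('a,"b,c",3', 'd,e,6'), "demo.csv")

lines <- readLines("demo.csv")

# Edit row 2, column 3. This row has no quoted commas, so splitting on
# the literal "," is safe; row 1 is never parsed and stays byte-identical.
fields <- strsplit(lines[2], ",", fixed = TRUE)[[1]]
fields[3] <- "99"
lines[2] <- paste(fields, collapse = ",")

writeLines(lines, "demo.csv")
```

Because untouched lines are copied through verbatim, the original mix of quoted and unquoted values is preserved exactly.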

Printing several pieces of output to the same CSV in R?

I am using the TraMineR package. I am printing output to a CSV file, like this:
write.csv(seqient(sequences.seq), file = "diversity_measures.csv", quote = FALSE, na = "", row.names = TRUE)
write.csv(seqici(sequences.seq), file = "diversity_measures.csv", quote = FALSE, na = "", row.names = TRUE, append= TRUE)
write.csv(seqST(sequences.seq), file = "diversity_measures.csv", quote = FALSE, na = "", row.names = TRUE, append= TRUE)
The dput(sequences.seq) object can be found here.
However, this does not append the output properly but produces this warning:
In write.csv(seqST(sequences.seq), file = "diversity_measures.csv", : attempt to set 'append' ignored
Additionally, it only gives me the output of the last command, so each call overwrites the file.
Is it possible to get all the columns into a single CSV file, with a column name for each (i.e. entropy, complexity, turbulence)?
You can use append = TRUE in write.table calls with the same file name, but you'll need to specify all the other arguments as needed. append = TRUE is not available for the wrapper function write.csv, as noted in the documentation:
These wrappers are deliberately inflexible: they are designed to
ensure that the correct conventions are used to write a valid file.
Attempts to change append, col.names, sep, dec or qmethod are ignored,
with a warning.
Or you could write everything in one call:
write.csv(data.frame(entropy = seqient(sequences.seq),
                     complexity = seqici(sequences.seq),
                     turbulence = seqST(sequences.seq)),
          'output.csv')
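If you do want the result sets stacked vertically in one file rather than combined side by side, the write.table route with append = TRUE looks like this. A sketch with made-up data frames standing in for the seqient/seqici/seqST results:

```r
# Stand-ins for the TraMineR results (one column each, same row names).
entropy    <- data.frame(Entropy = c(0.5, 0.7))
complexity <- data.frame(C = c(0.2, 0.3))

# The first write creates the file; col.names = NA writes a header row
# with an empty cell over the row names, as write.csv does.
write.table(entropy, "diversity_measures.csv", sep = ",",
            quote = FALSE, row.names = TRUE, col.names = NA)

# Later writes append rows; col.names = FALSE avoids a second header
# (and the "appending column names to file" warning).
write.table(complexity, "diversity_measures.csv", sep = ",",
            quote = FALSE, row.names = TRUE, col.names = FALSE,
            append = TRUE)
```

All other formatting arguments (sep, quote, row.names) must be repeated on every call, since write.table applies no csv defaults of its own.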
