I am creating columns of variables.
myVars=paste0("var",rep(1:5))
myVars
paste0(myVars,"=rnorm(5)")
output:
"var1=rnorm(5)" "var2=rnorm(5)" "var3=rnorm(5)" "var4=rnorm(5)"
"var5=rnorm(5)"
Note that the closing quote should come right after var1, as seen in the wanted output below.
I also want to paste in the commas shown in that output.
That should require something like paste0(A, B, C).
Want:
"var1"=rnorm(5), "var2"=rnorm(5), "var3"=rnorm(5), "var4"=rnorm(5),
"var5"=rnorm(5)
If we need double quotes around the elements of 'myVars', use dQuote with q = FALSE to avoid getting the fancy (curly) quotes:
out <- paste0(dQuote(myVars, q = FALSE), "=rnorm(5)")
cat(out, '\n')
#"var1"=rnorm(5) "var2"=rnorm(5) "var3"=rnorm(5) "var4"=rnorm(5) "var5"=rnorm(5)
If it should be a single string:
out1 <- paste(dQuote(myVars, q = FALSE), "=rnorm(5)", sep="", collapse=", ")
cat(out1, '\n')
#"var1"=rnorm(5), "var2"=rnorm(5), "var3"=rnorm(5), "var4"=rnorm(5), "var5"=rnorm(5)
Related
I basically need the outcome (a string) to contain double quotes, hence the need for escape characters. Preferably solved with base R, without extra packages.
I have tried sQuote, shQuote and noquote. They just manipulate the quotes, not the escape characters.
My list:
power <- "test"
myList <- list(
  "power" = power
)
I subset the content using:
myList
myList$power
Expected outcome (a string with following content):
" \"power\": \"test\" "
Using package glue:
library(glue)
glue(' "{names(myList)}": "{myList}" ')
"power": "test"
Another option using shQuote
paste(shQuote(names(myList), type = "cmd"),
      shQuote(unlist(myList), type = "cmd"),
      sep = ": ")
# [1] "\"power\": \"test\""
Not sure I understand your expectation. Is this what you want?
myList <- list (
"power" = "test"
)
stringr::str_remove_all(
  as.character(jsonlite::toJSON(myList, auto_unbox = TRUE)),
  "[\\{|\\}]")
# [1] "\"power\":\"test\""
If you want some spaces:
x <- stringr::str_remove_all(
  as.character(jsonlite::toJSON(myList, auto_unbox = TRUE)),
  "[\\{|\\}]")
paste0(" ", x, " ")
In Replace multiple strings in one gsub() or chartr() statement in R? it is explained how to replace multiple single-character strings in one statement with gsubfn(). E.g.:
x <- "doremi g-k"
gsubfn(".", list("-" = "_", " " = ""), x)
# "doremig_k"
I would however like to replace the string 'doremi' in the example with ''. This does not work:
x <- "doremi g-k"
gsubfn(".", list("-" = "_", "doremi" = ""), x)
# "doremi g_k"
I guess this is because the string 'doremi' contains multiple characters while I am using the metacharacter . in gsubfn. I have no idea what to replace it with; I must confess I find the use of metacharacters sometimes a bit difficult to understand. So, is there a way for me to replace '-' and 'doremi' at once?
You might be able to just use base R sub here:
x <- "doremi g-k"
result <- sub("doremi\\s+([^-]+)-([^-]+)", "\\1_\\2", x)
result
[1] "g_k"
Does this work for you?
gsubfn::gsubfn(pattern = "doremi|-", list("-" = "_", "doremi" = ""), x)
[1] " g_k"
The key is the search pattern "doremi|-", which says to match either "doremi" or "-"; "|" is the or operator.
Just a more generic version of @RLave's solution -
toreplace <- list("-" = "_", "doremi" = "")
gsubfn(paste(names(toreplace), collapse = "|"), toreplace, x)
[1] " g_k"
This is basically the R equivalent of this question.
I have a list of mixed elements:
l = list(-1, "quicksort", NULL)
And I want to turn it into a string:
string = '-1, "quicksort", NULL'
But I can't figure out how to easily keep the quotes inside the string without putting ALL elements in quotes:
paste(l, collapse = ", ") # WRONG
# "-1, quicksort, NULL"
paste(shQuote(l), collapse = ", ") # WRONG
# '"-1", "quicksort", "NULL"'
I have a solution, but it seems clumsy:
paste(lapply(l, function(x) if (class(x) == "character") shQuote(x) else x),
      collapse = ", ")
# '-1, "quicksort", NULL'
Is there a simpler (i.e. no if statement) solution?
deparse() the list and then remove the unwanted characters.
gsub("list|[()]", "", deparse(l))
# [1] "-1, \"quicksort\", NULL"
My preferred solution ended up being
paste(lapply(l, deparse), collapse = ", ")
Which bypasses the need for gsub stuff and supports any type of list element. I think it's a bit more readable too.
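As a quick check, deparse() also handles other element types, e.g. vectors and logicals:
l2 <- list(-1, "quicksort", NULL, c(1, 2), TRUE)
paste(lapply(l2, deparse), collapse = ", ")
# [1] "-1, \"quicksort\", NULL, c(1, 2), TRUE"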
Using knitr and RStudio, I'm trying to print a data frame to HTML or Word so that the leading whitespace in versicolor will push it to the right.
#data
library(knitr)
library(xtable)
df <- iris[c(1,51),c(5,1)]
df$Species <- as.character(df$Species)
df$Species[ df$Species=="versicolor"] <- " versicolor"
Trying different combinations of kable()...
#table
kable(df)
kable(df, right = FALSE, align = c("l", "l"))
kable(df, right = FALSE, align = c("r", "l"))
With each of these, the leading whitespace does not show up in the rendered table; what I'm trying to get is versicolor pushed to the right (indented) in the Species column.
If you're willing to muck with some HTML:
df$Species[df$Species == "versicolor"] <-
  "<code style='background:white'>  </code>versicolor"
will get you something like what you want,
or
df$Species[df$Species == "versicolor"] <-
  "<span style='padding-left:30px'> versicolor</span>"
will get you left-space padding.
The latter might even be cleaner programmatically (you can insert multiples of a fixed pixel value in padding-left).
You can try adding
df$Species <- gsub(" ", "&nbsp;", df$Species, fixed = TRUE)
before creating the table; that will change all the spaces before versicolor to HTML non-breaking spaces (&nbsp;).
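Putting it together with the question's data, a hedged end-to-end sketch (for HTML output, escape = FALSE is likely needed so the &nbsp; entities are not escaped; the three-space indent is arbitrary):
library(knitr)
df <- iris[c(1, 51), c(5, 1)]
df$Species <- as.character(df$Species)
df$Species[df$Species == "versicolor"] <- "   versicolor"
# Convert the leading spaces to non-breaking spaces before building the table
df$Species <- gsub(" ", "&nbsp;", df$Species, fixed = TRUE)
kable(df, format = "html", align = c("l", "l"), escape = FALSE)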
Separate the leading spaces from the trailing text with one gsub. Then replace each of those spaces with an HTML non-breaking space (&nbsp;) using a second gsub. Lastly, combine the two parts with paste.
As a one-liner:
paste0(gsub('\\s', '&nbsp;', gsub('^(\\s*)\\S.*', '\\1', df$Species)), gsub('^\\s*(\\S.*)', '\\1', df$Species))
If df$Species is " versicolor", the result is "&nbsp;versicolor".
If df$Species is " one two three", the result is "&nbsp;one two three" (only the leading spaces are converted).
Or, reformatted for clarity:
x <- df$Species
paste0(                                    # Combine edited text
  gsub('\\s', '&nbsp;',                    # Replace leading spaces with &nbsp;
       gsub('^(\\s*)\\S.*', '\\1', x)),    # Extract leading spaces
  gsub('^\\s*(\\S.*)', '\\1', x)           # Extract trailing text
)
What about using the kableExtra package? See "Row indentation" here:
https://cran.r-project.org/web/packages/kableExtra/vignettes/awesome_table_in_html.html
Basically,
library(kableExtra)
dt <- mtcars[1:5, 1:6]
kable(dt) %>%
  kable_styling("striped", full_width = F) %>%
  add_indent(c(1, 3, 5))
I am using R to do some data pre-processing, and here is the problem I am faced with: I read the data in using read.csv(filename, header=TRUE), and the spaces in variable names become "." - for example, a variable named Full Code becomes Full.Code in the resulting data frame. After the processing, I use write.xlsx(filename) to export the results, but the variable names are still the changed ones. How can I address this?
Besides, in the output .xlsx file, the first column becomes row indices (i.e., 1 to N), which is not what I am expecting.
If you set check.names=FALSE in read.csv when you read the data in, then the names will not be changed and you will not need to edit them before writing the data back out. This of course means that you would need to quote the column names (back quotes in some cases) or refer to the columns by location rather than name while editing.
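A minimal sketch of that workflow (the file name "mydata.csv" is just a placeholder; Full Code is the column from the question):
dat <- read.csv("mydata.csv", header = TRUE, check.names = FALSE)
dat$`Full Code`     # back quotes needed because the name contains a space
dat[["Full Code"]]  # or index by the quoted name
dat[, 2]            # or refer to the column by position (2 is illustrative)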
To get spaces back in the names, do this (right before you export - R does let you have spaces in variable names, but it's a pain):
# A simple regular expression to replace dots with spaces
# This might have unintended consequences, so be sure to check the results
names(yourdata) <- gsub(x = names(yourdata),
                        pattern = "\\.",
                        replacement = " ")
To drop the first-column index, just add row.names = FALSE to your write.xlsx(). That's a common argument for functions that write out data in tabular format (write.csv() has it, too).
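Which argument name works depends on which package provides write.xlsx (an assumption here); a short sketch with an illustrative file name:
# With the xlsx package:
write.xlsx(yourdata, "results.xlsx", row.names = FALSE)
# With openxlsx the equivalent argument is rowNames:
# write.xlsx(yourdata, "results.xlsx", rowNames = FALSE)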
Here's a function (sorry, I know it could be refactored) that makes nice column names even if there are multiple consecutive dots and trailing dots:
makeColNamesUserFriendly <- function(ds) {
  # FIXME: Repetitive.

  # Convert any number of consecutive dots to a single space.
  names(ds) <- gsub(x = names(ds),
                    pattern = "(\\.)+",
                    replacement = " ")

  # Drop the trailing spaces.
  names(ds) <- gsub(x = names(ds),
                    pattern = "( )+$",
                    replacement = "")
  ds
}
Example usage:
ds <- makeColNamesUserFriendly(ds)
Just to add to the answers already provided, here is another way of replacing the "." (or any other punctuation) in column names, using a regex with the stringr package:
require("stringr")
colnames(data) <- str_replace_all(colnames(data), "[:punct:]", " ")
For example try:
data <- data.frame(variable.x = 1:10, variable.y = 21:30, variable.z = "const")
colnames(data) <- str_replace_all(colnames(data), "[:punct:]", " ")
and
colnames(data)
will give you
[1] "variable x" "variable y" "variable z"