I'm using xtable to convert an R table into an object that can then be printed as a LaTeX table. In this table, individual numbers need to be printed in different colors, so I substitute those numbers before applying the xtable function, using
paste0("\\textcolor{black!25!green}{", x.tab[[4]][i], "}")
where x.tab is my R table.
The .tex file looks fine and compiles without any error. However, these colored numbers, which are now captured in strings, still need a decimal comma. All non-colored numbers are in the right format because I use
format.args = list(big.mark = ".", decimal.mark = ",")
in my print function.
Any help is much appreciated!
In my case, using options(OutDec = ",") does the trick.
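A minimal sketch of that idea, using my own example value rather than the question's data: OutDec affects anything that goes through format(), print(), or formatC() (which, as far as I can tell, is also what xtable uses internally), but paste() and as.character() do not honor it, so numbers that are pasted into \textcolor{} strings still need an explicit format() call.
options(OutDec = ",")
format(1234.56)        # "1234,56"
options(OutDec = ".")  # restore the default afterwards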
Use the prettyNum function to set your format:
library(xtable)
x <- c(1.6, 2.8, 1.3)
y <- c(0.067, 8.1, 7.55)
d <- data.frame(x, y)
print(xtable(prettyNum(d, decimal.mark = ",")))
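For the colored cells from the question specifically, another option (a sketch reusing the question's \textcolor markup and an example value of my own) is to format the number before pasting it into the string, since format.args only reaches cells that are still numeric:
x <- 1234.56
cell <- paste0("\\textcolor{black!25!green}{",
               format(x, big.mark = ".", decimal.mark = ","), "}")
cat(cell)   # \textcolor{black!25!green}{1.234,56}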
In the example below, when I save the number in a character format (i.e., .10) as a CSV file using data.table's fwrite, Excel displays it as a number rather than a character string.
Is there a way to save the number in a character format that Excel can recognize as a string?
I'm not simply trying to remove the 0 before the decimal point or keep the trailing 0.
I'd like to keep the character strings intact, exactly as they would be displayed in R (e.g., .10 but without the quotation marks).
library(data.table)
dt <- data.table(a = ".10")
fwrite(dt, "example.csv")
# The saved CSV file opened in Excel shows the number 0.1, rather than the string .10
Excel is mostly brain-dead when it comes to reading things in the way you want. Here's one workaround:
library(data.table)
dat <- data.frame(aschr = c("1", "2.0", "3.00"), asnum = 1:3,
                  hascomma = "a,b", hasnewline = "a\nb", hasquote = 'a"b"')
# character columns that contain only number-like strings (the ones Excel mangles)
notnumber <- sapply(dat, function(z) is.character(z) & all(is.na(z) | !grepl("[^-0-9.]", z)))
# columns that need CSV quoting because they contain commas, quotes, or newlines
needsquotes <- sapply(dat, function(z) any(grepl('[,\n"]', z))) & !notnumber
dat[needsquotes] <- lapply(dat[needsquotes], function(z) dQuote(gsub('"', '""', z), FALSE))
# wrap the number-like strings as ="..." so Excel treats them as text
dat[notnumber] <- lapply(dat[notnumber], function(z) paste0("=", dQuote(z, FALSE)))
fwrite(dat, "iris.csv", quote = FALSE)
The result, opened in Excel, keeps the number-like strings as text (the original answer showed a screenshot here).
Most of this is precautionary: if you know your data well and you know that none of the data contains commas, quotes, or newlines, then you can do away with the needsquotes portions. notnumber is the column(s) of interest.
Bottom line: we "trick" Excel into keeping it a string by forcing it to be an Excel formula. This may not work with other spreadsheets (e.g., Calc); I haven't tested.
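Applied to the single-column example from the question, the core of the trick boils down to this minimal sketch (quote = FALSE matters so fwrite does not wrap the formula in an extra layer of quotes):
library(data.table)
dt <- data.table(a = ".10")
dt[, a := paste0('="', a, '"')]   # ="..." is read by Excel as a formula that returns text
fwrite(dt, "example.csv", quote = FALSE)
# Excel should now show .10 instead of 0.1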
I am trying to output a dataframe in R to a .txt file. I want the .txt file to ultimately mirror the dataframe output, with columns and rows all aligned. I found this post on SO which mostly gave me the desired output with the following (now modified) code:
library(dplyr)

gene_names_only <- select(deseq2_hits_table_df, Gene, L2F)
colnames(gene_names_only) <- c()
capture.output(
  print.data.frame(gene_names_only, row.names = F, col.names = F, print.gap = 0, quote = F, right = F),
  file = "all_samples_comparison_gene_list.txt"
)
The resultant output, however, does not align negative and positive values.
I ultimately want both positive and negative values to be properly aligned with one another. This means that for -0.00012 and 4.00046, the '-' character of the first number would be aligned with the '4' of the next number. How could I accomplish this?
Two other questions:
The output file has a blank line at the beginning of the output. How can I change this?
The output file also seems to put far more spaces between the left column and the right column than I would want. Is there any way I can change this?
Maybe try a finer scale treatment of the printing using sprintf and a different format string for positive and negative numbers, e.g.:
> df = data.frame(x=c('PICALM','Luc','SEC22B'), y=c(-2.261085123,-2.235376098,2.227728912))
> sprintf('%-15s%.6f', df$x[1], df$y[1])
[1] "PICALM         -2.261085"
> sprintf('%-15s%.6f', df$x[2], df$y[2])
[1] "Luc            -2.235376"
> sprintf('%-15s%.7f', df$x[3], df$y[3])
[1] "SEC22B         2.2277289"
EDIT:
I don't think that write.table or similar functions accept custom format strings, so one option could be to create a data frame of formatted strings and then use write.table or writeLines to write to a file, e.g.
dfstr = data.frame(x = sprintf('%-15s', df$x),
                   y = sprintf(paste0('%.', 7 - 1*(df$y < 0), 'f'), df$y))
(The format string for y here is essentially what I previously proposed.) Next, write dfstr directly:
write.table(x = dfstr, file = 'filename.txt',
            quote = F, row.names = F, col.names = F)
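If the stray blank line from capture.output(print(...)) is also a problem, a small variation (a sketch building on dfstr above) is to paste the two formatted columns together and write them with writeLines, which gives full control over spacing and adds nothing extra:
writeLines(paste0(dfstr$x, dfstr$y), "filename.txt")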
I am trying to print an apostrophe for the column name below in a table using tableGrob
"Kendall's~tau"
The end result is that the whole label is italicized, without the ~ and tau being interpreted.
How do I correctly specify this?
I don't think it's helpful, but this is the theme that I've specified to tableGrob:
table_theme <- ttheme_default(
  core = list(fg_params = list(fontsize = 6)),
  colhead = list(fg_params = list(fontsize = 6, parse = TRUE)),
  rowhead = list(fg_params = list(fontsize = 6, parse = TRUE)),
  padding = unit(c(2, 3), "mm"))
The column name is interpreted via plotmath in grDevices -- the standard way of specifying mathematical annotation in figures generated in R.
Again, I don't think it has anything to do with how to specify the expression itself, but here is the table constructor:
tableGrob(stats_df,
          theme = table_theme,
          rows = c("Kendall's~tau"))
Here's a reproducible example:
library(gridExtra)
library(grid)
data(iris)
table_theme <- ttheme_default(rowhead = list(fg_params=list(parse=TRUE)))
grid.table(head(iris),
           rows = c(letters[c(1:4)], "plotmath~works~omega", "Kendall's~tau"),
           theme = table_theme)
This works:
library(gridExtra)
library(grid)
data(iris)
table_theme <- ttheme_default(rowhead = list(fg_params=list(parse=TRUE)))
grid.table(head(iris),
           rows = c(letters[c(1:4)], "plotmath~works~omega", "Kendall's"~tau),
           theme = table_theme)
Try with a backslash in your expression like "Kendall\'s~tau". It should work then.
I tried to use an apostrophe in expressions for plotting in ggplot. With my data it was invalid to use ' directly in the expression, but this worked:
expression(paste(u,"'",(t),sep=""))
But this paste also caused bad behaviour of the subscript expression expression(U[0]). To use both together, this worked:
paste(expression(paste("u","'",sep="")),"/U[0]",sep="")
If anyone knows an easier way, I'd be very glad.
If your apostrophe is embedded in a longer string, along with special symbols like ~, these solutions will not work. The only thing I've found to work is using regular expression substitution.
# This doesn't work
library(ggplot2)
stringWithApostrophe <- "Matt's and Louise's diner~...and~also Ben's diner~X^2"
qplot(1:10, 1:10) + annotate("text", x = 2, y = 4, label = stringWithApostrophe, parse = T)
Error: "Error in parse(text = text[[i]]) : :1:5: unexpected string constant"
The problem is special characters like the tilde and the apostrophe occurring in the same segment: the apostrophe-containing pieces have to be wrapped in quotes so plotmath sees them as string constants separated from the ~ operators. Here's the code to do that:
stringWithApostrophe2 <- stringr::str_replace_all(string = stringWithApostrophe,
                                                  pattern = "([^~()*]*'[^~()*']*)",
                                                  replacement = "\"\\1\"")
qplot(1:10, 1:10) + annotate("text", x = 2, y = 8, hjust = 0, label = stringWithApostrophe2, parse = T)
Plots successfully. The final expression that plotmath in R parses correctly is:
""Matt's and Louise's diner"~...and~"also Ben's diner"~X^2"
I'm trying to write a dashboard with shinydashboard in R to display some values using renderValueBox and valueBoxOutput. These values are not hardcoded but are being scraped from another source daily.
These values are currency amounts and should be reported like $XXX,XXX.XX, but instead I see XXXXXX.XX. Is there a way, like a wrapper, to easily format those values? Otherwise I've thought of brute-forcing some regex on it with gsub... but ew. Please and thanks :)
Discovered the function prettyNum(): this function is amazing for simple conversion to comma-separated numbers.
> prettyNum(56789, big.mark = ",")
[1] "56,789"
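Building on that, a tiny wrapper (the name as_currency is mine, not from the original answer) could format the scraped values before they go into the value box; formatC gives both the two decimal places and the thousands separator:
as_currency <- function(x) {
  paste0("$", formatC(x, format = "f", digits = 2, big.mark = ","))
}
as_currency(123456.78)
# [1] "$123,456.78"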
Another way is to use the {scales} package and the dollar_format() function.
This function is a labelling function factory, in the sense that it creates other functions.
I usually need to output numbers in euros, so I defined the following function:
euro_format <- scales::dollar_format(
  prefix = "\u20ac", # the euro symbol
  suffix = "",
  big.mark = ",",
  decimal.mark = ".",
  accuracy = 1
)
> euro_format(20842)
[1] "€20,842"
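Inside a shinydashboard server function this might then be used roughly as in the sketch below, where scraped_value() is a stand-in for whatever reactive holds the scraped number:
# inside the shinydashboard server function; scraped_value() is a placeholder reactive
output$revenue_box <- renderValueBox({
  valueBox(euro_format(scraped_value()), subtitle = "Daily revenue")
})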
I've used read.table to read a file that contains numbers such as 0.00001
When I write them back with write.table, those numbers appear as 1e-05.
How can I keep the old format?
I would just change the scipen option before calling write.table. Note that this will also change how numbers are displayed when printing to the console.
options(scipen=10)
write.table(foo, "foo.txt")
options(scipen=0) # restore the default
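A quick way to see the effect (a small sketch of my own): with the default penalty R prefers scientific notation for this value, while with a higher penalty it stays in fixed notation.
format(0.00001)       # "1e-05" with the default scipen
options(scipen = 10)
format(0.00001)       # "0.00001"
options(scipen = 0)   # restore the default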
You can do this by converting your numbers to strings with formatting as you require, then using the argument quote = FALSE in the call to write.table.
dfr <- data.frame(x = 10^(0:15))
dfr$y <- format(dfr$x, scientific = FALSE)
write.table(dfr, file = "test.txt", quote = FALSE)
Note that you shouldn't need to change the format of the numbers in your file. Pretty much every piece of scientific software and every spreadsheet understands scientific notation for numbers, and also has number formatting options so you can view them how you choose.
If the input is a mixture of scientific notation and explicit notation numbers, then you will be writing your own parser to read in the numbers and keep track of which ones were in which formats. In fact, you'll want to keep a string representation of those numbers lying around so you can write back exactly what was in the input.
However, if you just want to write.table() with consistently explicit notation, try:
write.table(format(_your_table_here_, scientific=FALSE), ...)
For maximum control, loop over all rows and print them to a text file formatted with sprintf:
# Find the number of rows in the data.frame `test`
nrows <- dim(test)[1]
# Initialise a character vector to hold one formatted line per row
mylines <- vector("character", nrows)
# Loop over all rows in the data frame
for (i in 1:nrows) {
  # Print out exactly the format you want
  mylines[i] <- sprintf("Line %d: %.2f\t%.2f", i, test[i, "x"], test[i, "y"])
}
# write lines to file
writeLines(mylines,"out.txt")
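To make the loop self-contained, here is a toy test data frame of my own, plus what the sprintf call produces for its first row:
test <- data.frame(x = c(1234.5678, 0.12), y = c(0.9876, 45.6))
sprintf("Line %d: %.2f\t%.2f", 1, test[1, "x"], test[1, "y"])
# [1] "Line 1: 1234.57\t0.99"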