I want to export a data frame to a text file from R, but when I use the code below, all the data in my output file ends up as quoted strings.
write.table(docm, file = "DoCM.txt", sep ="\t" ,row.names = TRUE, col.names = TRUE)
Here is my output data frame:
How can I export my data frame without the values being converted to quoted strings?
Thanks for any help.
This could be achieved by setting quote=FALSE:
write.table(mtcars, file = "mtcars.txt", quote = FALSE, sep ="\t" ,row.names = TRUE, col.names = TRUE)
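What "all the data become strings" usually means here is that write.table's default quote = TRUE wraps every character value (and the column and row names) in double quotes. A minimal sketch of the difference, using a throwaway data frame and illustrative file names:

```r
df <- data.frame(id = 1:2, label = c("a", "b"))

# Default quote = TRUE: character values and names are wrapped in double quotes
write.table(df, "quoted.txt", sep = "\t", quote = TRUE)

# quote = FALSE writes the same values verbatim, with no quoting anywhere
write.table(df, "unquoted.txt", sep = "\t", quote = FALSE)

readLines("quoted.txt")    # the label values appear as "a", "b"
readLines("unquoted.txt")  # the label values appear as a, b
```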
I have a .csv file with 175 rows and 6 columns. I want to append a 176th row. My code is as follows:
x <- data.frame('1', 'ab', 'username', '<some.sentence>', '2017-05-04T00:51:35Z', '24')
write.table(x, file = "Tweets.csv", append = T)
What I expect to see is:
Instead, my result is:
How should I change my code?
write.table(x, file = "Tweets.csv", sep = ",", append = TRUE, quote = FALSE,
col.names = FALSE, row.names = FALSE)
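To see why each of these arguments matters (the file name below is illustrative): with the defaults, write.table would append a space-separated row together with a fresh header line, quotes, and a row name, which corrupts an existing CSV. A minimal end-to-end sketch:

```r
# Create a small CSV, then append one row in the same comma-separated style
old <- data.frame(id = 1, name = "alice")
write.table(old, "tweets_demo.csv", sep = ",", quote = FALSE, row.names = FALSE)

new_row <- data.frame(id = 2, name = "bob")
write.table(new_row, "tweets_demo.csv", sep = ",", append = TRUE, quote = FALSE,
            col.names = FALSE, row.names = FALSE)

readLines("tweets_demo.csv")
# "id,name" "1,alice" "2,bob"
```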
library(dplyr)   # for the %>% pipe
library(purrr)   # map_df()
library(readxl)  # read_excel()

df <- dir(getwd(), full.names = TRUE) %>% map_df(~ read_excel(.x, col_names = TRUE))
write.csv(df, file = "mynewfile.csv", col.names = T, row.names = F, fileEncoding = "UTF8", quote = FALSE)
Unfortunately it is not encoded as UTF-8; ö, 360° and similar still come out as invalid characters.
It works when I save it with write.xlsx, but that approach fails for larger numbers of rows (with 50k rows I ran into memory problems).
This is what it looks like; it is a sample (after saving it as csv, opening it, and using Text to Columns to split it into separate columns).
Any suggestions?
Let's say this is what my sample file looks like:
df<-data.frame(A = c("ö","ö","ö"), B=c("360°", "360°", "360°"), C= c(123,123,123))
Try with fileEncoding = "UTF-8" (write.csv has no encoding argument; the output encoding is set through fileEncoding, which write.table uses when opening the file; col.names is dropped here because write.csv ignores it with a warning):
write.csv(df, file = "mynewfile.csv", row.names = FALSE, fileEncoding = "UTF-8", quote = FALSE)
Edit: your code works in my RStudio. Have you set the correct option in your Global Options (Tools > Global Options > Code > Saving > Default text encoding)?
Let's consider this simple creation of a csv file with a dataframe that contains special characters:
d <- data.frame(x = "Édifice", y="Arrêt")
write.table(x = d, file = "test.csv", sep = ",", row.names = F, col.names = F, quote = F, fileEncoding = "UTF-8")
The csv file looks as expected:
Édifice,Arrêt
But when I open this csv in Excel I get:
I have tried using readr, collapsing columns and then writing them with writeLines, writing using write.xlsx, checked for encoding options. None worked.
My constraint is that the input is a dataframe, and the output must be a csv readable in excel.
Same problem with German umlauts. I use write_excel_csv from readr:
library(readr)
write_excel_csv(x = d, path = "test.csv", col_names = F)
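The reason write_excel_csv helps is that it writes a UTF-8 byte-order mark (BOM) at the start of the file, which is the cue Excel uses to detect the encoding; plain write.csv with fileEncoding = "UTF-8" writes valid UTF-8 but no BOM. A small sketch (the file name is illustrative):

```r
library(readr)

d <- data.frame(x = "Édifice", y = "Arrêt")
write_excel_csv(d, "test_bom.csv")

# The first three bytes of the file are the UTF-8 BOM (EF BB BF)
readBin("test_bom.csv", what = "raw", n = 3)
```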
I have some strings in one of the columns of my data frame that look like:
bem\\2015\black.rec
When I export the data frame to a text file using the following line:
write.table(data, file = "sample.txt", quote = FALSE, row.names = FALSE, sep = '\t')
Then in the text file the text looks like:
bem\2015BELblack.rec
Do you know an easy way to keep all the backslashes intact when writing the table to a text file?
The way I have resolved this is by converting backslashes into forward slashes:
library(readr)
library(stringr)

dataset <- read_delim(dataset.path, delim = '\t', col_names = TRUE, escape_backslash = FALSE)
dataset$columnN <- str_replace_all(dataset$Levfile, "\\\\", "//")  # each backslash becomes //
dataset$columnN <- str_replace_all(dataset$columnN, "//", "/")     # collapse // down to /
write.table(dataset, file = "sample.txt", quote = FALSE, row.names = FALSE, sep = '\t')
This exports the text imported as bem\\2015\black.rec with the required slashes: bem//2015/black.rec
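If stringr and readr are not needed elsewhere, the same substitution can be done with a single base-R gsub. A sketch on an in-memory string (note that in R source code each literal backslash must be typed as \\):

```r
s <- "bem\\\\2015\\black.rec"   # the literal text: bem\\2015\black.rec
out <- gsub("\\\\", "/", s)     # replace every single backslash with a forward slash
out                             # "bem//2015/black.rec"
```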