Delimiters while writing csv files in R

How can I use | (pipe) as a delimiter while writing csv files in R?
When I try to write a data set to a file with write.csv and sep = "|", it ignores the separator and writes the file as a plain comma-separated file.
write.csv2 also doesn't seem to cover the variety of other characters that could be used as a separator.
Is there a way to use other characters such as ^, $, ~, ¬ or | as a delimiter while writing a csv file in R?
Thanks.

You have to understand that .csv means "comma-separated values" (https://en.wikipedia.org/wiki/Comma-separated_values).
If you want to export with one of those characters as the separator, you need another function.
For example, use write.table, and you'll still be able to load the file with R, Excel, etc.:
write.table(data, "data.txt", sep = "|")
data_load <- read.table("data.txt", sep = "|")
Feel free to use any character as the separator.
Or you could force this plain-text output to have a .csv extension:
write.table(data, "data.csv", sep = "|")
data_load <- read.csv("data.csv", sep = "|")
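As a quick round-trip check (a sketch, assuming data is an ordinary data frame; the file name is arbitrary):
# write a pipe-delimited file without row names, read it back, and compare
write.table(data, "data.psv", sep = "|", row.names = FALSE)
data_check <- read.table("data.psv", sep = "|", header = TRUE)
all.equal(data, data_check)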

This answer is just a variation of the one I gave for this question. They are similar; I don't think the question itself is an exact duplicate, but both are part of a bigger question (not yet asked).
In the help for write.table, it states:
write.csv and write.csv2 provide convenience wrappers for writing CSV files.
...
These wrappers are deliberately inflexible: they are designed to ensure that the correct conventions are used to write a valid file.
Attempts to change append, col.names, sep, dec or qmethod are ignored, with a warning.
To set sep or any of these other parameters, you need to use write.table instead of write.csv.
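As a quick illustration (a sketch, assuming a small throwaway data frame df), write.csv warns and falls back to commas, while write.table honours the separator:
df <- data.frame(x = 1:3, y = c("a", "b", "c"))
# write.csv ignores sep and emits a warning ("attempt to set 'sep' ignored")
write.csv(df, "out.csv", sep = "|")
# write.table honours sep, so this produces a pipe-delimited file
write.table(df, "out.psv", sep = "|", row.names = FALSE)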

Related

Regarding the post-processing of data files read in using fread

I once saw the following code segment for reading and processing the csv file.
cd <- fread('traindata.csv', skip = 4, head = F)[1:100, 1:26]
X <- apply(cd[, , with = F], 2, function(cc) as.numeric(sub(",", "", cc, fixed = TRUE)))
My understanding of the second line is that the author aims to replace the separator "," with "". My question is: when we read the csv file using fread, the separator "," from the original csv file should not be kept, so why do we still need the second line of code to remove ","?
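One plausible reading (an assumption on my part, since no sample data is shown) is that the sub() call is not touching the field separator at all, but rather commas embedded inside quoted values, such as thousands separators like "1,234", which fread keeps as part of a character column:
library(data.table)
# hypothetical example file with a quoted thousands separator
writeLines(c('id,value', '1,"1,234"', '2,"5,678"'), "example.csv")
dt <- fread("example.csv")
str(dt$value)                                      # character: "1,234" "5,678"
as.numeric(sub(",", "", dt$value, fixed = TRUE))   # 1234 5678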

How to create a dataframe from a csv file with text separated by pipe "|"? [duplicate]

I have just received a data file whose extension is "*.psv". After doing a bit of research, I still don't know how to open it in R.
We could use read.table to read a *.psv file:
read.table("myfile.psv", sep = "|", header = FALSE, stringsAsFactors = FALSE)
There may be different interpretations of a psv file, but in a data-mining context it usually means a "pipe-separated" file, i.e. the data in the file is separated by "|".
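As an alternative sketch (assuming the data.table or readr packages are available), fread usually detects the pipe separator automatically, and read_delim lets you state it explicitly:
library(data.table)
psv_dt <- fread("myfile.psv")                  # "|" is normally auto-detected
library(readr)
psv_tbl <- read_delim("myfile.psv", delim = "|", col_names = FALSE)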

Why can't R read this table while Excel can?

I am trying to read a specific file that I copied from an SFTP location. The file is pipe-delimited. I can read the file in Excel, but R reads it as null values and the column names are duplicated. Is this an encoding issue? I am trying to create a bash script to automate this process. Any help? Below is the link to the data.
Here's the file!
I have tried changing the encoding, but without knowing which encoding the file uses, I am struggling. I have tried read_delim, read_table, read.table, read_csv and read.csv, but with no luck.
This is the code I have used to read the file:
read_delim("./Engagement_Level.txt", delim = "|")
I would like to read it as a data frame.
The issue is that the file encoding is UTF-16LE, which read_delim cannot read at present.
You could use the base read.delim and file() to specify the encoding:
read.delim(file("Engagement_Level.txt", encoding = "UTF-16LE"), sep = "|")
That will convert all the quoted numbers to numeric. If you'd rather they were type character, to deal with later:
read.delim(file("Engagement_Level.txt", encoding = "UTF-16LE"), sep = "|",
colClasses = "character")
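If you're unsure which encoding a file uses, one diagnostic worth trying (assuming the readr package is installed) is readr::guess_encoding(), which lists likely encodings with confidence scores:
library(readr)
guess_encoding("Engagement_Level.txt")   # should list UTF-16LE near the top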
I also really recommend using Excel to build a CSV file via Data > Text to Columns; it is not ideal in this context, but it is very reliable and quick.
Then use read.csv(file, sep = ",").

How to write multiple xtabs in a CSV file?

For writing a single xtab, I have used:
write.csv(xtabData1, "analysis.csv")
For appending one more xtab to the same csv file, I tried:
write.csv(xtabData2, "analysis.csv", append=T)
But this throws a warning "attempt to set 'append' ignored" and overwrites the csv file.
One solution would be to join the tab data first using rbind, e.g.
write.csv(rbind(xtabData1, xtabData2), file="analysis.csv")
The append option is disabled in write.csv(). write.csv() is just a wrapper function for write.table(). Here's more from the help file:
write.csv and write.csv2 provide convenience wrappers for writing CSV
files. They set sep and dec (see below), qmethod = "double", and
col.names to NA if row.names = TRUE (the default) and to TRUE
otherwise. write.csv uses "." for the decimal point and a comma for
the separator. write.csv2 uses a comma for the decimal point and a
semicolon for the separator, the Excel convention for CSV files in
some Western European locales. These wrappers are deliberately
inflexible: they are designed to ensure that the correct conventions
are used to write a valid file. Attempts to change append, col.names,
sep, dec or qmethod are ignored, with a warning.
Use write.table() instead (with sep="," and whatever other settings you'd like).
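For example, a minimal sketch (assuming xtabData1 and xtabData2 exist as in the question):
# write the first xtab, then append the second; write.table() honours append = TRUE
write.table(xtabData1, "analysis.csv", sep = ",", col.names = NA)
write.table(xtabData2, "analysis.csv", sep = ",", append = TRUE, col.names = FALSE)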
