CSV file is not recognized - R

I'm exporting an Excel file to a .csv file (because I want to import it into R), but R doesn't recognize it.
I think this is because when I open it in Notepad I get:
Item;Description
1;ja
2;ne
while a file that imports without issues looks like this in Notepad:
"Item","Description"
"1","ja"
"2","ne"
Does anybody know how I can either export it from Excel in the right format, or import a CSV file with a ";" separator into R?

It's easy to deal with semicolon-delimited files: you can use read.csv2() instead of read.csv() (although be aware that read.csv2() also uses the comma as the decimal separator!), or specify sep=";".
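For example (a quick sketch, assuming the exported file is named "data.csv"):
# semicolon separator, comma as decimal mark:
df <- read.csv2("data.csv")
# or keep read.csv and just override the separator:
df <- read.csv("data.csv", sep = ";")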
Sorry to ask, but did you try reading ?read.csv? The relevant information is in there, although it can admittedly be a little overwhelming/hard to sort out if you're new to R:
sep: the field separator character. Values on each line of the file are separated by this character. If ‘sep = ""’ (the default for ‘read.table’) the separator is ‘white space’, that is one or more spaces, tabs, newlines or carriage returns.

Related

Exporting tweets text with multiple lines into csv [duplicate]

I need to generate a file for Excel; some of the values in this file contain multiple lines.
There's also non-English text in there, so the file has to be Unicode.
The file I'm generating now looks like this (in UTF-8, with non-English text mixed in, and with a lot of lines):
Header1,Header2,Header3
Value1,Value2,"Value3 Line1
Value3 Line2"
Note the multi-line value is enclosed in double quotes, with a normal everyday newline in it.
According to what I found on the web this is supposed to work, but it doesn't, at least not with Excel 2007 and UTF-8 files; Excel treats the third line as the second row of data, not as the second line of the first data row.
This has to run on my customer's machines and I have no control over their version of Excel, so I need a solution that will work with Excel 2000 and later.
Thanks
EDIT: I "solved" my problem by having two CSV options: one for Excel (Unicode, tab-separated, no newlines in fields) and one for the rest of the world (UTF-8, standard CSV).
Not what I was looking for, but at least it works (so far).
You should have space characters at the start of fields ONLY where the space characters are part of the data. Excel will not strip off leading spaces. You will get unwanted spaces in your headings and data fields. Worse, the " that should be "protecting" that line-break in the third column will be ignored because it is not at the start of the field.
If you have non-ASCII characters (encoded in UTF-8) in the file, you should have a UTF-8 BOM (3 bytes, hex EF BB BF) at the start of the file. Otherwise Excel will interpret the data according to your locale's default encoding (e.g. cp1252) instead of utf-8, and your non-ASCII characters will be trashed.
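If you are generating the file from R, one way to get that BOM is the readr package (my suggestion, not part of the original answer):
library(readr)
df <- data.frame(Header1 = "Value1",
                 Header3 = "Value3 Line1\nValue3 Line2")  # embedded newline
write_excel_csv(df, "out.csv")  # writes UTF-8 with a leading BOM so Excel detects the encoding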
The following comments apply to Excel 2003, 2007 and 2013; not tested on Excel 2000.
If you open the file by double-clicking on its name in Windows Explorer, everything works OK.
If you open it from within Excel, the results vary:
You have only ASCII characters in the file (and no BOM): works.
You have non-ASCII characters (encoded in UTF-8) in the file, with a UTF-8 BOM at the start: it recognises that your data is encoded in UTF-8 but it ignores the csv extension and drops you into the Text Import not-a-Wizard, unfortunately with the result that you get the line-break problem.
Options include:
Train the users not to open the files from within Excel :-(
Consider writing an XLS file directly ... there are packages/libraries available for doing that in Python/Perl/PHP/.NET/etc (see the sketch below for one option from R)
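For instance, the writexl package can do this from R (my suggestion; the answer does not name an R package):
library(writexl)
# writes a real .xlsx, sidestepping CSV quoting and encoding quirks entirely
write_xlsx(data.frame(Note = "Line1\nLine2"), "out.xlsx")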
After lots of tweaking, here's a configuration that works when generating files on Linux and reading them on Windows with Excel, though the embedded newline format does not follow the standard:
Newlines within a field need to be \n (and obviously quoted in double quotes)
End of record: \r\n
Make sure that you don't start a field with equals, otherwise it gets treated as a formula and truncated
In Perl, I used Text::CSV to do this as follows:
use Text::CSV;
open my $FO, ">:encoding(utf8)", $filename or die "Cannot create $filename: $!";
my $csv = Text::CSV->new({ binary => 1, eol => "\r\n" });
#for each row...:
$csv->print($FO, \@row);   # \@row is a reference to the array of field values
Recently I had a similar problem; I solved it by importing an HTML file. The baseline example would be like this:
<html xmlns:v="urn:schemas-microsoft-com:vml"
xmlns:o="urn:schemas-microsoft-com:office:office"
xmlns:x="urn:schemas-microsoft-com:office:excel"
xmlns="http://www.w3.org/TR/REC-html40">
<head>
<style>
<!--
br {mso-data-placement:same-cell;}
-->
</style>
</head>
<body>
<table>
<tr>
<td>first line<br/>second line</td>
<td style="white-space:normal">first line<br/>second line</td>
</tr>
</table>
</body>
</html>
I know it is not a CSV, and it might work differently for various versions of Excel, but I think it is worth a try.
I hope this helps ;-)
In Excel 365, while importing the file:
Data -> From Text/CSV -> select the file -> Transform Data.
In the Power Query Editor, on the right-hand side under "Query Settings", under APPLIED STEPS, click the settings icon on the "Source" row.
In the line-break dropdown, select "Ignore line breaks inside quotes".
Then press OK -> File -> Close & Load.
It is worth noting that when a .CSV file has fields wrapped in double quotes which contain line breaks, Excel will not import the .CSV file properly if the .CSV file is written in UTF-8 format. Excel treats the line break as if it were CR/LF and begins a new line. The spreadsheet is garbled. That seems to be true even if semi-colons are used as field delimiters (instead of commas).
The problem can be resolved by using Windows Notepad to edit the .CSV file, using File > Save As... to save the file, and before saving the file, changing the file encoding from UTF-8 to ANSI. Once the file is saved in ANSI format, then I find that Microsoft Excel 2013 running on Windows 7 Professional will import the file properly.
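The same re-encoding can be scripted; a rough R equivalent (my sketch, with placeholder file names) is:
# read the UTF-8 text and write it back out as Windows-1252 ("ANSI")
txt <- readLines("report.csv", encoding = "UTF-8")
writeLines(iconv(txt, from = "UTF-8", to = "CP1252"), "report_ansi.csv",
           useBytes = TRUE)  # keep the converted bytes untouched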
Newline inside a value seems to work if you use semicolon as separator, instead of comma or tab, and use quotes.
This works for me in both Excel 2010 and Excel 2000. However, surprisingly, it works only when you open the file as a new spreadsheet, not when you import it into an existing spreadsheet using the data import feature.
On a PC, ASCII character #10 (line feed) is what you want to place a newline within a value.
Once you get it into Excel, however, you need to make sure word wrap is turned on for the multi-line cells or the newline will appear as a square box.
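For example, a file like this (my own illustration) keeps the break inside a single cell when opened directly:
Col1;Col2
1;"first line
second line"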
This will not work if you try to import the file into EXCEL.
Associate the file extension csv with EXCEL.EXE so you will be able to invoke EXCEL by double-clicking the csv file.
Here I place some text, followed by the newline character, followed by some more text, and enclose the whole string in double quotes.
Do not use a CR since EXCEL will place part of the string in the next cell.
""text" + NL + "text""
When you invoke EXCEL, you will see this. You may have to auto size the height to see it all. Where the line breaks will depend on the width of the cell.
2
DATE
Here's the string-building code in Basic (CHR$(34) is the double quote, CHR$(10) the newline):
CHR$(34) & "2" & CHR$(10) & "DATE" & CHR$(34)
I found this and it has worked for me:
$delimiter = ',';
$enc1 = '"';
$enc2 = '""';
Then where you need to have stuff enclosed
$myfile = ('/path/to/myfile.csv');
//erase any previous contents
$fp = fopen($myfile, 'w+');
fwrite($fp, $enc1 . 'Column Heading 1' . $enc1 . $delimiter );
//append to new file
$fp2 = fopen($myfile, 'a');
fwrite($fp2, $enc1 . 'Column Heading 2' . $enc1 . $delimiter );
.....
fwrite($fp2, $enc1 . 'Last Column Heading' . $enc1 . $delimiter. PHP_EOL );
Then, when you need to write something out that includes the " character (like HTML), you can do this:
fwrite($fp2, $enc2 . $myhtmlstring . $enc2 . $delimiter);
New lines (rows) are ended by appending . PHP_EOL, as in the last fwrite above.
The end of the script prints out a link so that the user can download the file.
echo '<a href="/path/to/myfile.csv">Click here to download file</a>';
Test this; it fully works for me.
Put the following lines in an xxxx.csv file:
hola_x,="this is my text1"&CHAR(10)&"and I keep writing",hola_a
hola_y,="this is my text2"&CHAR(10)&"and I keep writing",hola_b
hola_z,="this is my text3"&CHAR(10)&"and I keep writing",hola_c
Open it with Excel.
In some cases it will open directly; otherwise you will need to use text-to-columns conversion.
Expand the column width and hit the Wrap Text button, or use Format Cells and activate Wrap Text.
Thanks for the other suggestions, but they did not work for me. I am in a pure Windows environment and did not want to play with Unicode or other funny things.
This way you are putting a formula from the CSV into Excel (note the = before the quotes); there may be many uses for this method of working.
PS: in your suggestions please put some samples of the data, not only the code.
UTF files that contain a BOM will cause Excel to treat new lines literally even if the field is surrounded by quotes. (Tested with Excel 2008 for Mac.)
The solution is to make any new lines a carriage return (CHR 13) rather than a line feed.
putting "\r" at the end of each row actually had the effect of line breaks in excel, but in the .csv it vanished and left an ugly mess where each row was squashed against the next with no space and no line-breaks
For File Open only, the syntax is
,"one\n
two",...
The critical thing is that there is no space after the first ",". Normally spaces are fine, and are trimmed if the string is not quoted, but otherwise they are nasty; it took me a while to figure that out.
It does not seem to matter whether the line ends with \n or \r\n.
Make sure you expand the formula bar so you can actually see the text in the cell (got me after a long day...)
Now of course, File Open will not support UTF-8 Properly (unless one uses tricks).
Excel > Data > Get External Data > From Text
It can be set to UTF-8 mode (it is way down the encoding list). However, in that case the new lines do not seem to work, and I know no way to fix that.
(One might think that after 30 years MS would get this stuff right.)
The way we do it (we use VB.Net) is to enclose the text with new lines in Chr(34) which is the char representing the double quotes and replace all CR-LF characters for LF.
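In R, the same transformation would look roughly like this (a sketch of the idea; the original is VB.Net):
# wrap the value in double quotes, doubling any embedded quotes,
# and normalize CR-LF line endings to plain LF
quote_field <- function(x) {
  x <- gsub("\r\n", "\n", x, fixed = TRUE)
  paste0('"', gsub('"', '""', x, fixed = TRUE), '"')
}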
Normally a new line is "\r\n". In my CSV, I replaced "\r" with an empty string.
Here is code in Javascript:
cellValue = cellValue.replace(/\r/g, "")
When I open the CSV in MS Excel, it worked well. If a value has multiple lines, it will stay within 1 single cell in the Excel sheet.
You can write the value as "\"Value3 Line1\nValue3 Line2\"", escaping the quotes and embedding the newline; it works for me when generating a csv file in Java.
Here is an interesting approach using JavaScript (note it does not handle quoted fields):
String.prototype.csv = function () { return this.split(/,\s*/); };
var results = "Mugan, Jin, Fuu".csv();
console.log(results[0] == "Mugan" &&
            results[1] == "Jin" &&
            results[2] == "Fuu",
            "The text values were split properly");
Printing an HTML newline (<br/>) into the content and opening the file in Excel will work in any version of Excel.
You could use the keyboard shortcut Alt+Enter.
1. Select the cell you wish to edit.
2. Enter edit mode either by double-clicking it or by pressing F2.
3. Press Alt+Enter. This will create a new line in the cell.

Import of special characters & NA's of csv into SAS does not work

I have a csv file with many "NA" values and with special characters such as ä, ö or ß. I want to import this csv file into SAS via proc import, but unfortunately I have two problems:
1) The NA's are read as characters and not as missing values
2) Special characters are changed automatically into something like #!+-~
When I import the csv into R I am able to solve both problems with the encoding "UTF-8": NA's are recognized as missings and special characters are displayed correctly. My idea was to export the file from R as a dbf file and import that dbf file into SAS. This procedure solves the problem with the NA's; however, special characters are still displayed in a wrong way. I also tried different encodings in SAS, but that did not work either. Any help is highly appreciated!
I would use a data step instead of proc import. It could look like:
Data MyCSV;
  Infile "C:\MyName\ImportData.CSV"
    Delimiter="," LRecL=1000 DSD Missover Firstobs=2; * Firstobs=2 skips the column names;
  Informat qty_txt $9. ;   * 9 = length in characters;
  Input qty_txt ;          * the Input statement is required to actually read the field;
  If qty_txt ^= "NA" Then qty = Input(qty_txt, Best15.);
  Drop qty_txt;
Run;
(If you're exporting from R set na="." in write.csv.)
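For example (a sketch; df and the file name are placeholders):
write.csv(df, "for_sas.csv", na = ".", row.names = FALSE, fileEncoding = "UTF-8")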
Regarding the special-characters problem, defining the variable as character in the INFORMAT statement should work.

Avoid importing empty line breaks as \n

Some of the fields of a csv file I'd like to import contain text followed by one or two empty line breaks. As a result, when using read.csv2 to import the csv file, I obtain fields containing "[text] + \n".
I tried removing '\n' using gsub("[\n]", "", x), but this takes an awful lot of time. I was wondering whether I can simply avoid importing the empty line breaks; then there would be no '\n' in my data. Using strip.white=TRUE does not work.
Any idea whether I can avoid importing empty line breaks?
The data, saved in csv format, looks a bit like this when opened with Notepad:
1;"text - text";1;Good
1;"text - text
";1;Good
2;"text - text";1;Good
2;"text - text";2;Good
3;"text - text";1;Good
My real dataset has many more columns, and many of them have the '\n' problem.
To add some more info, this is how I import my data (the example above has no headers, but in reality I have headers):
read.csv2("data.csv", header = TRUE, stringsAsFactors=FALSE, strip.white=TRUE,
blank.lines.skip = TRUE)
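If the import itself can't be changed, a post-import cleanup that is usually much faster than substituting over everything is to restrict gsub to the character columns and anchor it to trailing breaks (a sketch, assuming the result of the read.csv2 call above is stored in df):
df[] <- lapply(df, function(x)
  if (is.character(x)) gsub("[\r\n]+$", "", x) else x)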
Edit: as an easy/quick R solution might not be at hand, I tackled my problem with an Excel macro (I recorded a macro while applying the first procedure described in https://www.ablebits.com/office-addins-blog/2013/12/03/remove-carriage-returns-excel/).

How to remove the extra commas from a csv file?

I was trying to use a csv file in R with the read.transactions() command from the arules package.
The csv file, when opened in Notepad++, shows extra commas for every non-existing value, so I'm having to delete those extra commas manually before using the csv in read.transactions(). For example, the actual csv file, when opened in Notepad++, looks like:
D115,DX06,Slz,,,,
HC,,,,,,
DX06,,,,,,
DX17,PG,,,,,
DX06,RT,Dty,Dtcr,,
I want it to appear like the following when sending it into read.transactions():
D115,DX06,Slz
HC
DX06
DX17,PG
DX06,RT,Dty,Dtcr
Is there any way I can make that change in read.transactions() itself, or any other way? Even before that, we don't get to see those extra commas in R (the output I showed was from Notepad++).
So how can we remove them in R when we can't even see them?
A simple way to create a new file without the trailing commas is:
file_lines <- readLines("input.txt")
writeLines(gsub(",+$", "", file_lines),
"without_commas.txt")
In the gsub command, ",+$" matches one or more (+) commas (,) at the end of a line ($).
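The cleaned file can then be passed to arules, e.g. (a sketch, assuming the basket format matches your data):
library(arules)
trans <- read.transactions("without_commas.txt", format = "basket", sep = ",")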
Since you're using Notepad++, you could just do the substitution in that program: Search > Replace, replace ,+$ with nothing, Search Mode=Regular Expression.

repair data in csv file

I have a huge csv file, separated by commas, and I want to do an analysis with glm in R.
In one column there is data with a comma inside it, something like: bla,blabla
When reading the file into R with read.csv.sql I get this error message:
RS-DBI driver: (RS_sqlite_import: ./agp.csv line 47612 expected 37 columns of data but found 38)
This is due to the 'extra' comma in some of the data; not every row of that column has the extra comma.
How can I fix this? I want to remove this superfluous comma.
Thanks for the reaction,
André
The CSV format is very simple and can easily be hand edited. In order to include a comma in a value, you must surround the value with quotes. Try this: "bla,blabla". If that data happens to contain any quotes, e.g. blah,"thequotedblah",blah, those quotes need to be escaped with another quote, like this: "blah,""thequotedblah"",blah".
Although there is no official standard around it, there isn't much to the CSV format. Wikipedia has a great CSV reference that I have personally used to implement CSV support in applications. Spend 5-10 minutes reading it and you'll know everything you ever need to know to manually create/read/repair CSV data.
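For instance, once the comma-bearing value is quoted, base R parses it back as a single field (a quick sketch using read.csv's text argument):
read.csv(text = 'x,"bla,blabla",y', header = FALSE)
#   V1         V2 V3
# 1  x bla,blabla  y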
Is it just this one line that contains a non-quoted comma, or are there several such lines? Editing the .csv with an editor that can handle large files (e.g. UltraEdit) to sanitize that one record would certainly help. Asaph's suggestion of quoting is also a good 'un.
