R removes spaces in read.table

I came across some surprising behavior today that doesn't seem right to me. I have a CSV file with several columns, some numeric and some text. One of my text columns contains extra spaces between some words. When I read this file into R using read.csv (or more generally read.table), it removes the extra spaces. I am not talking about leading or trailing whitespace, but spaces inside the string.
I have looked through the docs and nowhere can I find an option to turn off this behavior. Surely there must be a way to tell R to read the data as it is and not remove these spaces. Or is there?
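
A quick way to see what is actually stored, as opposed to what print() shows (the file name and contents here are invented for illustration):

writeLines('id,text\n1,"two  spaces   between  words"', "spaces.csv")
df <- read.csv("spaces.csv", stringsAsFactors = FALSE)
df$text          # the raw string as read in
nchar(df$text)   # a character count reveals whether internal spaces survived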

Related

Removing different words from a document using R console

I have managed to retrieve a text file, but I want to remove different words from it. I have looked at read.table and have no clue how to use it to help me remove certain words. I have 300 words, and these are some of them. How can I remove all these words using the R console? I have two files: one is sk.text, which is a whole document, and the other is bash.txt, which contains just words. I want to remove all the words in sk.text that match the words given in bash.txt.
with
within
without
work
worked
working
works
would
A simple way would be to use
gsub(paste0('\\b', YOURVECTOROFWORDSTOREMOVE, '\\b', collapse = '|'), '', YOURSTRING)
which replaces every occurrence of the words in the vector, matched at word boundaries (\b), with an empty string.
But you might want to look at the tm package and work with a corpus object if you have many files like this. There you can remove the words you like simply with
tm_map(YOURCORPUS, removeWords, YOURVECTOROFWORDSTOREMOVE)
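
Putting the gsub() approach together with the files named in the question, a rough sketch (it assumes bash.txt holds one word per line; adjust the reading step if the format differs, and the output file name is made up):

words <- readLines("bash.txt")                          # the 300 words to remove
txt <- readLines("sk.text")                             # the document to clean
pattern <- paste0("\\b", words, "\\b", collapse = "|")  # one big alternation
writeLines(gsub(pattern, "", txt), "sk_cleaned.text")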

Treating "#" as a regular character when reading data

I'm almost certain this has been asked before, but due to a certain social media app I'm drowning in unrelated search results.
So the data set that I'm importing contains actual "#" characters, as in Apartment #404, and I'd like to preserve them if possible, but R treats "#" as an end-of-line marker or something. At first it would bomb out on the first occurrence; then I set fill=TRUE and now it just ignores the rest of the line after that.
How does one instruct R to treat #'s as regular characters?
If you are not using "#" as a comment symbol in your data, you can use
read.table(..., comment.char="")
That should treat "#" like any other character.
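
As an aside, read.csv() already defaults to comment.char = "", so this usually only bites with read.table(). A small illustration with a made-up file:

writeLines(c("unit,floor", "Apartment #404,4"), "addr.csv")
read.table("addr.csv", sep = ",", header = TRUE, fill = TRUE)        # everything after "#" is dropped
read.table("addr.csv", sep = ",", header = TRUE, comment.char = "")  # "Apartment #404" survives
read.csv("addr.csv")                                                 # same result: read.csv defaults to comment.char = ""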

Stray commas when importing CSV into R

I have a large CSV file (170k rows), which I'm importing into R. Each entry in the file is comma-delimited - however, in some of the columns (particularly those with a collection of URLs stuck together), there are commas in the strings. An example below:
Will Smith,25/09/68,null,male,08/10/14,450109,TRUE,http://commons.wikimedia.org/wiki/Special:FilePath/Will_Smith_2011,_2.jpg?width=300http://upload.wikimedia.org/wikipedia/commons/thumb/5/51/Will_Smith_2011,_2.jpg/200px-Will_Smith_2011,_2.jpghttp:.....
The added comma has a knock-on effect - it makes R (and Excel) think that it is a separate column, which then extends out over other columns and destroys the formatting. Given that roughly 10% of the data is affected, is there a quick way to get around this?
If the rule suggested by this limited example is to remove the commas that appear before underscores, then this succeeds:
gsub("[,][_]", "_", s)
Without some rule for when commas should be ignored, no.
If you have some consistent rule, then use str_replace_all with a regex to find the exceptions.
If you're the one making the CSV, I'd suggest you delimit with a different character.
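
For instance, applied to the raw lines before parsing (the file name is hypothetical):

s <- readLines("people.csv")    # one string per row, no parsing yet
s <- gsub("[,][_]", "_", s)     # drop commas that sit right before an underscore
df <- read.csv(text = s, header = FALSE, stringsAsFactors = FALSE)  # set header to match your file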

Improperly formatted CSV, how to repair?

I have a csv, and each line reads as follows:
"http://www.videourl.com/video,video title,video duration,thumbnail,<iframe src=""http://embed.videourl.com/video"" frameborder=0 width=510 height=400 scrolling=no> </iframe>,tag 1,tag 2",,,,,,,,,,,,,,,,,,,,,,,,,,
Is there a program I can use to clean this up? I'm trying to import it into WordPress and map it to current fields, but it isn't functioning properly. Any suggestions?
Just use search and replace in this case. Remove the commas at the end, then replace each remaining comma with "," (quote, comma, quote) so that every field ends up individually quoted.
Should anyone else have the same issue: know that this solution will only work with data much like the example given. If the data has a lot of text and there are commas within the text that need to be kept, then search-and-replacing commas will not work. Using a regex would be the next option, and that can be done in Notepad++.
However, I think the regex pattern depends on the data, so there is not much point creating an example.
PHP could also be used to explode each line, remove the values that match one of many regexes (i.e. URL, money), and then what is left could be (depending on the data again) just a block of text. That approach may not work if there are two or more columns with a lot of text.
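
For what it's worth, the same repair can be scripted in R rather than PHP; this sketch is only valid for records shaped exactly like the example above, per the caveat already given (file names are made up):

lines <- readLines("export.csv")
lines <- sub(",+$", "", lines)                    # strip the run of trailing commas
lines <- gsub(",", "\",\"", lines, fixed = TRUE)  # requote: every remaining comma becomes ","
writeLines(lines, "export-fixed.csv")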

Repair data in CSV file

I have a huge CSV file, separated by commas, and I want to do an analysis with glm in R.
In one column there is data with a comma inside it, something like: bla,blabla
When reading the file into R with read.csv.sql, I get this error message:
RS-DBI driver: (RS_sqlite_import: ./agp.csv line 47612 expected 37 columns of data but found 38)
This is due to the 'extra' comma in some of the data; not every row in that column has the extra comma, so the column counts differ between rows.
How can I fix this? I want to remove this superfluous comma.
Thanks for the reaction,
André
The CSV format is very simple and can easily be hand-edited. To include a comma in a value, you must surround the value with quotes. Try this: "bla,blabla". If that data happens to contain any quotes, e.g. blah,"thequotedblah",blah, those quotes need to be escaped with another quote, like this: "blah,""thequotedblah"",blah".
Although there is no official standard around it, there isn't much to the CSV format. Wikipedia has a great CSV reference that I have personally used to implement CSV support in applications. Spend 5-10 minutes reading it and you'll know everything you ever need to know to manually create/read/repair CSV data.
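
Those quoting rules are easy to verify from R itself; here is the same example round-tripped (inline data, no file needed):

raw <- 'id,value\n1,"bla,blabla"\n2,"blah,""thequotedblah"",blah"'
df <- read.csv(text = raw, stringsAsFactors = FALSE)
df$value   # both the embedded commas and the escaped quotes come through intact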
Is it just this one line that contains a non-quoted comma - or are there several such lines? Editing the .csv with an editor that can handle large files (e.g. Ultraedit) to sanitize that one record would certainly help. Asaph's suggestion of quoting is also a good 'un.
