Reading csv file using R and RStudio

I am trying to read a csv file in R, but I am getting some errors.
This is what I have (and I have already set the correct path):
mydata <- read.csv("food_poisioning.csv")
But I am getting this error:
Error in make.names(col.names, unique = TRUE) :
invalid multibyte string at '<ff><fe>Y'
In addition: Warning messages:
1: In read.table(file = file, header = header, sep = sep, quote = quote, :
line 1 appears to contain embedded nulls
2: In read.table(file = file, header = header, sep = sep, quote = quote, :
line 2 appears to contain embedded nulls
I believe I am getting this error because my csv file is actually not separated by commas, but by spaces.
I tried using sep=" ", but it didn't work.
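For what it's worth, the <ff><fe> in the error message is the byte-order mark of a UTF-16 little-endian file (Excel's "Unicode Text" export writes these), which would also explain the embedded nulls. A minimal sketch, assuming the file is UTF-16LE; the file contents below are made up for illustration:

```r
# "<ff><fe>" is the UTF-16LE byte-order mark, so the nulls in the warnings
# are likely the high bytes of two-byte characters, not a separator problem.
# Simulate such a file, then read it with an explicit fileEncoding:
tmp <- tempfile(fileext = ".csv")
con <- file(tmp, open = "w", encoding = "UTF-16LE")
writeLines(c("Year,Cases", "2019,12", "2020,15"), con)
close(con)

mydata <- read.csv(tmp, fileEncoding = "UTF-16LE")
```

If the file is additionally space-separated, combining fileEncoding with sep = "" (any whitespace) in read.table() may be needed.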

If you're having difficulty using read.csv() or read.table() (or other import commands), try the "Import Dataset" button on the Environment pane in RStudio. It is especially useful when you are not sure how to specify the table format, or when the format is complex.
For your .csv file, use "From Text (readr)..."
A window will pop up and allow you to choose a file/URL to upload. You will see a preview of the data table after you select a file/URL. You can click on the column names to change the column class or even "skip" the column(s) you don't need. Use the Import Options to further manage your data.
Here is an example using CreditCard.csv from Vincent Arel-Bundock's GitHub projects.
You can also modify and/or copy and paste the code in Code Preview, or click Import to run the code when you are ready.
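The code the readr importer shows in Code Preview is ordinary R, so it can be pasted into a script and reused. A minimal sketch of the shape it typically generates; the file here is a stand-in written to a temp path, and show_col_types = FALSE just silences readr's column-type message:

```r
library(readr)

# Stand-in for CreditCard.csv: write a small sample file to a temp path.
path <- tempfile(fileext = ".csv")
writeLines(c("card,reports,age", "yes,0,37", "no,3,34"), path)

# Mirrors the read_csv() call the Import Dataset dialog generates.
CreditCard <- read_csv(path, show_col_types = FALSE)
```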

Related

Trouble loading xls file into R with R Markdown

I'm trying to upload a GSS data set into R Markdown for creating a lecture presentation.
Each time I do, I get errors that I do not understand. Any help would be appreciated.
read.csv("directory/GSS2018.xls", headers = TRUE)
Error in read.table(file = file, header = header, sep = sep, quote = quote, :
unused argument (headers = TRUE)
My .xls has headers, so I'm not sure why it is saying "untrue". Even still, I tried taking out the headers option and received this:
read.csv("~directory/GSS2018.xls")
line 1 appears to contain embedded nulls
line 2 appears to contain embedded nulls
line 3 appears to contain embedded nulls
line 4 appears to contain embedded nulls
line 5 appears to contain embedded nulls
Error in make.names(col.names, unique = TRUE) :
invalid multibyte string at '<1a>'
I can't quite work out what this error is telling me, nor how to fix it. I can import my data just fine using the "Import Dataset" button in the Environment pane of RStudio - but when I put that code into R Markdown, it shows all these errors.
Any help is appreciated!
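Two things worth noting here: the argument read.csv() accepts is header, not headers (hence "unused argument"), and .xls is a binary Excel format that read.csv() cannot parse at all, which is where the embedded-null and multibyte errors come from. A hedged sketch using the readxl package, which is also what RStudio's "From Excel" importer uses; the path below is a placeholder:

```r
# read.csv() expects plain text; an .xls file is binary, hence the null bytes.
# readxl::read_excel() reads .xls/.xlsx directly, using the first row as the
# header by default (col_names = TRUE).
library(readxl)
gss <- read_excel("~/directory/GSS2018.xls", col_names = TRUE)  # placeholder path
```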

Read only selected columns in R read.csv

I have encountered this strange xlsx file, which has a lot of empty columns; when I scroll rightwards in Microsoft Excel, the empty columns keep appearing. I want to read this file using read.csv, but it gives me the warning In read.table(file = file, header = header, sep = sep, quote = quote, : line 1 appears to contain embedded nulls. I am guessing this is because of the empty columns. How can I solve this and read it using read.csv? Thank you
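If the unwanted material really is a block of junk columns, base read.csv() can skip columns outright via colClasses: a "NULL" entry drops that column during the read. A minimal sketch with a made-up file (the embedded-nulls warning itself, if the file is genuinely an .xlsx rather than a CSV, is a separate problem - see the readxl note above):

```r
# Write a small stand-in file with a column we don't want.
path <- tempfile(fileext = ".csv")
writeLines(c("a,junk,c", "1,,3", "2,,4"), path)

# "NULL" (the string, not the R value) tells read.csv to skip that column.
dat <- read.csv(path, colClasses = c("integer", "NULL", "integer"))
names(dat)  # only "a" and "c" remain
```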

Scopus_ReadCSV {CITAN} not working with csv file exported from Scopus

I am using Rstudio with R 3.3.1 on Windows 7 and I have installed CITAN package. I am trying to import bibliography entries from a CSV file that I exported from Scopus (as it is, untouched), choosing to export all available information.
This is the error that I get:
example <- Scopus_ReadCSV("scopus.csv")
Error in Scopus_ReadCSV("scopus.csv") : Column not found: `Source'.
In addition: Warning messages:
1: In read.table(file = file, header = header, sep = sep, quote = quote, :
invalid input found on input connection 'scopus.csv'
2: In read.table(file = file, header = header, sep = sep, quote = quote, :
incomplete final line found by readTableHeader on 'scopus.csv'
Column `Source' is there when I open the file, so I do not know why it says 'not found'.
Eventually I came to the following conclusions:
The encoding of the CSV file as exported from Scopus was UTF-8-BOM, which does not seem to be recognized by R when using Scopus_ReadCSV("file.csv") or read.table("file.csv", header = TRUE, sep = ",", fileEncoding = "UTF-8").
Although the file from Scopus declares an encoding, it can contain some "strange" non-English characters that the read functions in R cannot handle. (I mainly found this problem in names with special characters.)
Solutions for those issues:
Open the CSV file with a text editor like Notepad++ and save it with UTF-8 encoding so that it becomes readable for R as UTF-8.
When running the read function in R you will notice that it stops reading (e.g. at the 40th out of 200 records). See exactly where it stopped; that way you can find the special character by opening the CSV in the text editor, and then erase/change it so you don't hit the same issue in R again.
Another solution that worked for me:
Open the file in Google Sheets, then download it from there again as a *.csv-file. R opens it correctly afterwards.
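As a further option, recent versions of base R accept the encoding name "UTF-8-BOM", which strips the byte-order mark on read, so re-saving the file may not be necessary. A minimal sketch that fabricates a BOM-prefixed file to show the mechanism:

```r
# Write a CSV prefixed with the UTF-8 BOM bytes EF BB BF, as Scopus exports are.
path <- tempfile(fileext = ".csv")
writeBin(c(as.raw(c(0xEF, 0xBB, 0xBF)), charToRaw("Source,Year\nFoo,2016\n")), path)

# Without fileEncoding, the BOM ends up glued onto the first column name
# (which is one way a "Column not found" error can arise);
# "UTF-8-BOM" tells R to discard it.
dat <- read.csv(path, fileEncoding = "UTF-8-BOM")
```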

Importing multiple csv files with lapply

When I need to import multiple csv files I use:
Cluster <- lapply(dir(),read.csv)
Previously setting the working directory, of course. But somehow today it stopped working, returning this error message:
Error in read.table(file = file, header = header, sep = sep, quote = quote, :
no lines available in input
The only unusual thing I did was set the Java directory manually so that rJava could be loaded.
Any idea what happened?
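"no lines available in input" means read.csv was handed a file with no content, which typically happens when dir() picks up an empty or non-CSV file (or when the working directory silently changed, e.g. as a side effect of the Java setup). A defensive sketch that filters on the .csv extension, using made-up files in a sandbox directory:

```r
# Build a sandbox directory with two real CSVs and one empty stray file.
d <- tempfile()
dir.create(d)
writeLines(c("a,b", "1,2"), file.path(d, "one.csv"))
writeLines(c("a,b", "3,4"), file.path(d, "two.csv"))
file.create(file.path(d, "notes.txt"))  # empty: would trigger the error

# Filter to *.csv instead of taking everything dir() returns.
files <- list.files(d, pattern = "\\.csv$", full.names = TRUE)
Cluster <- lapply(files, read.csv)
```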

Using read.csv when a data entry is a space (not blank!)

I am having a problem with using read.csv in R. I am trying to import a file that has been saved as a .csv file in Excel. Missing values are blank, but I have a single entry in one column which looks blank, but is in fact a space. Using the standard command that I have been using for similar files produces this error:
raw.data <- read.csv("DATA_FILE.csv", header=TRUE, na.strings="", encoding="latin1")
Error in type.convert(data[[i]], as.is = as.is[i], dec = dec, na.strings = character(0L)) :
invalid multibyte string at ' floo'
I have tried a few variations, adding arguments to the read.csv() command such as na.strings=c(""," ") and strip.white=TRUE, but these result in the exact same error.
It is a similar error to what you get when you use the wrong encoding option, but I am pretty sure this shouldn't be a problem here. I have of course tried manually removing the space (in Excel), and this works, but as I'm trying to write generic code for a Shiny tool, this is not really optimal.
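With a plain ASCII space, na.strings = c("", " ") does behave as expected (sketch below), so the "invalid multibyte string" error suggests the offending cell holds something other than an ordinary space - possibly a non-ASCII byte such as Excel's non-breaking space (0xA0), in which case the encoding argument, not na.strings, is the knob to turn. That is an inference from the error text, not a confirmed diagnosis:

```r
# A stand-in file where one cell is a single ASCII space, not empty.
path <- tempfile(fileext = ".csv")
writeLines(c("id,floor", "1, ", "2,first"), path)

# Treat both the empty string and a lone space as missing.
raw.data <- read.csv(path, header = TRUE, na.strings = c("", " "))
is.na(raw.data$floor[1])  # the space cell is read as NA
```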
