I have an Excel sheet which has formulas in one column, like C=(A-32)/1.8. If I read it with the read_excel function, it shows an "unexpected symbol" error for that column. I need help reading this.
I think you need to force each column type with the col_types = argument of the read_excel() function in the readxl package. You can specify the type "text", which reads the cells as they are.
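For example, a minimal sketch (the file name is a placeholder):

library(readxl)

# read every column as text so entries like C=(A-32)/1.8 come in verbatim;
# a single col_types value is recycled across all columns
df <- read_excel("my_sheet.xlsx", col_types = "text")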
I previously read a string/factor column containing the multiplication symbol (×) from a csv file into R, and R recognised it. After my recent update, R no longer recognises it. My previous code was
read.csv("Histology4.csv", stringsAsFactors = TRUE)
I would like to keep the multiplication symbol (×) in my dataframe. Does anyone have any suggestions? Thanks in advance!
Column in the csv file:
After reading into R:
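One thing I have not ruled out is the file encoding; a sketch of what I could try, assuming that is the cause (the "UTF-8" value is a guess and might need to be "latin1" instead):

read.csv("Histology4.csv", stringsAsFactors = TRUE, fileEncoding = "UTF-8")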
Regards
I'm trying to use ggplot2 on a large data set stored in a csv file which I used to read with Excel.
I don't know how to convert this data into a data.frame. In particular, I have a date column in the following format: "2020/04/12:12:00". How can I get R to understand this format?
If it's a csv, you can use:
the fread function from data.table; this will be the fastest way to read your csv
read_csv or read_csv2 (for ;-delimited files) from the readr package
If it's an .xls (or .xlsx) document, have a look at the readxl package.
All these functions import your data as data.frames (with additional classes like data.table for fread or tibble for read_csv).
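For illustration, a quick sketch of both approaches (the file name is a placeholder):

library(data.table)
library(readr)

dt  <- fread("large_file.csv")      # returns a data.table (also a data.frame)
tbl <- read_csv("large_file.csv")   # returns a tibble (also a data.frame)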
Edit
Given your comment, it looks like your file is not an Excel file but a csv. If you want to convert a column to a date type, assuming your dataframe is called df:
df[, dates := as.POSIXct(get(colnames(df)[1]), format = "%Y/%m/%d:%H:%M")]
Note that you don't need to use cbind or even reassign the data.table, because you are using the := operator.
As the message tells you, you don't need the extra precision of POSIXlt.
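Putting it together, a minimal end-to-end sketch, assuming the file is called "large_file.csv" and the date column is the first one:

library(data.table)

df <- fread("large_file.csv")
# parse the first column into POSIXct in place, using the stated format
df[, dates := as.POSIXct(get(colnames(df)[1]), format = "%Y/%m/%d:%H:%M")]
str(df$dates)   # should now show POSIXct values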
Going by the question alone, I would suggest the openxlsx package; it has helped me reduce the time significantly when reading large datasets. Three points you may find helpful, based on your question and the comments:
The read command stays the same as in the xlsx package, but I would suggest you use openxlsx::read.xlsx(file_path)
the arguments are again the same, but in place of sheetIndex it is sheet, and it takes either the sheet name or its number
If the existing columns are converted to character, then a simple as.Date would work
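A short sketch of the above, where the file path, sheet number, and date column name are placeholders:

library(openxlsx)

df <- read.xlsx("big_file.xlsx", sheet = 1)   # sheet instead of sheetIndex
# if a date column arrived as character, convert it explicitly
df$date <- as.Date(df$date, format = "%Y-%m-%d")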
Given this CSV file:
How can I read the file so that the extra commas that are not part of the data are excluded?
It seems that the file is fine. Have you tried the correct argument options in your importing function?
Would you like to try read_delim() from the readr package?
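For instance, a minimal sketch, with the file name as a placeholder:

library(readr)

# read a comma-separated file; delim = "," makes the separator explicit
df <- read_delim("my_file.csv", delim = ",")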
The basic format of the scan function in R for reading a file with characters looks like this:
a <- scan(file.choose(), what = 'char', sep = ',')
I have a csv file with names as a separate column. Can I use what='char' in read.csv? If yes, how do I use it? If not, how do I read the names column?
There is an entire R manual on importing and exporting data
https://cran.r-project.org/doc/manuals/r-release/R-data.html
read.table (or more specifically read.csv, which is read.table with the default separator being a comma) are the functions you are looking for.
a <- read.csv(yourfile)
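To get at the names column specifically, a small sketch, assuming the file has a column called "name" (the file and column names are placeholders):

# since R 4.0, read.csv keeps character columns as character by default;
# on older versions set stringsAsFactors = FALSE explicitly
a <- read.csv("yourfile.csv", stringsAsFactors = FALSE)
name_column <- a$name   # the column of names, as a character vector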
I have to generate a big Excel spreadsheet with XLConnect. I am filling each column in this spreadsheet with my calculations, and at the end I write them to the spreadsheet:
writeWorksheetToFile(file = FileName, mtr, startRow = 1, startCol = strcol, sheet = "Sheet1", header = FALSE, rownames = FALSE)
but if I open the Excel file I can only see columns up to AMJ. Is there a way to see all of my columns, or is the number of columns in an XLSX file limited?
I am not sure about this, Kaja, because I use a convenience wrapper for XLConnect, but all I need to provide as arguments are the object I want to print to file (for you, mtr) and the filename, such as "mtr.print.xlsx".
Why do you need to specify the startRow? Also, perhaps your startCol argument is what limits you to the AMJ column? Have you tried omitting it?
In short, supply only the R object and the filename and see what happens.
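A minimal sketch of that suggestion, with the output filename as a placeholder:

library(XLConnect)

# write the whole object to a fresh file, leaving startRow/startCol at their defaults
writeWorksheetToFile(file = "mtr.print.xlsx", data = mtr, sheet = "Sheet1")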