I'm developing an application in R Shiny. I'm reading SAS XPT files using the read.xport function and creating data tables. I use the labels of the data table columns to display choices in a checkboxGroupInput. This works well for all SAS datasets except a few, where I get the error "string 1 is invalid in this locale". When I researched this, I found that some of the column labels end in special characters; for example, the column CNTRYLNM has the label "Country, Long Name" followed by a stray non-printable character.
When I display the label with unname(label(datatable)), it comes back as "Country, Long Name\xa0"; other column labels have \xa1 at the end.
How do I get rid of these characters, either while reading the XPT file itself or afterwards?
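One common workaround, offered here as a hedged sketch (the object name datatable is taken from the question; this is not a confirmed fix), is to clean the labels after reading the file rather than during the read:

library(Hmisc)   # for label(), as used in the question

lbls <- unname(label(datatable))

# Option 1: treat the labels as Latin-1 and transliterate/drop anything non-ASCII
lbls_clean <- iconv(lbls, from = "latin1", to = "ASCII//TRANSLIT", sub = "")

# Option 2: remove every byte outside printable ASCII, working byte-wise to
# avoid the "invalid in this locale" error
lbls_clean <- gsub("[^ -~]", "", lbls, useBytes = TRUE)

The cleaned vector can then be passed to checkboxGroupInput() as the choices.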
I am trying to compute means for grouped variables in an imported CSV file using group_by/summarise, but only the first variable (column) returns a mean; the rest of the columns return NAs. When I check the variable types using str(Cluster_kpis), the columns show as numeric, yet R claims they are not numeric. Where could the issue be?
The issue was my CSV file: I had to format all the variable columns as "Number" in the spreadsheet before importing it into R. This is not the best solution, but for now it works!
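A possibly more robust alternative is to do the coercion in R itself. Below is a hedged sketch (the grouping column cluster_id is a hypothetical name; adjust to your data): strip separators from the character columns, coerce them to numeric, then summarise.

library(dplyr)

Cluster_kpis <- Cluster_kpis %>%
  mutate(across(where(is.character),
                ~ as.numeric(gsub("[, ]", "", .x))))   # drop commas/spaces, then coerce

Cluster_kpis %>%
  group_by(cluster_id) %>%                             # hypothetical grouping column
  summarise(across(where(is.numeric), ~ mean(.x, na.rm = TRUE)))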
I am trying to scrape from a 276-page PDF available here: https://www.acf.hhs.gov/sites/default/files/documents/ocse/fy_2018_annual_report.pdf
Not only is the document very long, but it also has tables in different formats. I tried using the extract_tables() function in the tabulizer library. This successfully scrapes the data tables beginning on page 143 of the document but does not work for the tables on pages 18-75. Are these pages unscrapable? If so, why?
I get error messages that say "more columns than column names" and "duplicate 'row.names' are not allowed".
library(tabulizer)

child_support_scrape <- extract_tables(
  file   = "C:/Users/Jenny/Downloads/OCSE_2018_annual_report.pdf",
  method = "decide",
  output = "data.frame")
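One thing that may be worth trying before switching tools (a suggestion of mine, not part of the original answer): restrict extract_tables() to the problem pages and return matrices instead of data frames, since the "more columns than column names" and "duplicate 'row.names'" errors most likely come from the data-frame conversion rather than from the extraction itself.

library(tabulizer)

# Hypothetical retry: pull only pages 18-75 as raw matrices and inspect them
raw_tables <- extract_tables(
  file   = "C:/Users/Jenny/Downloads/OCSE_2018_annual_report.pdf",
  pages  = 18:75,
  method = "stream",     # or "lattice"; "decide" picks one per page
  output = "matrix")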
Text in PDF files is not stored in a plain-text format, so it is generally hard to extract tables from a PDF. The following method provides an alternative way to extract the table from this PDF. It requires the pdftools and plyr packages.
# Download the pdf file as a variable in R
pdf_text <- pdftools::pdf_text("https://www.acf.hhs.gov/sites/default/files/documents/ocse/fy_2018_annual_report.pdf")
# Focus on the table on page 22
pdf_text22 <- strsplit(pdf_text[[22]], "\n")[[1]]
# Split each line into cells on runs of two or more spaces (regular expression)
pdf_text22 <- strsplit(pdf_text22, " {2,100}")
# Convert the result into a data frame, one row per line of the page
pdf_text22 <- plyr::rbind.fill(lapply(pdf_text22, function(x) as.data.frame(t(matrix(x)))))
Additional formatting may be required to beautify the data frame.
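For instance, the sketch below (my own assumption, not part of the original answer; the header-row index will vary from page to page) drops empty rows, promotes an assumed header row to column names, and coerces number-like columns:

tbl <- pdf_text22

# Drop rows that are entirely empty
tbl <- tbl[rowSums(!is.na(tbl) & tbl != "") > 0, ]

# Promote an assumed header row (here row 1) to column names
names(tbl) <- as.character(unlist(tbl[1, ]))
tbl <- tbl[-1, ]

# Coerce columns that are really numbers with thousands separators or dollar signs
tbl[] <- lapply(tbl, function(col) {
  num <- suppressWarnings(as.numeric(gsub("[,$]", "", col)))
  if (all(is.na(num) == is.na(col))) num else col
})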
I have 2 columns of data in a table loaded from Excel: ID and BeforeConv, containing numbers and text, respectively.
After loading this data from Excel into RStudio, I applied some tm_map-based text cleaning to a copy of the BeforeConv column and stored the result in a new column, AfterConv.
This is my code:
library(readxl)
library(tm)

x <- read_excel(file.choose())
corpus <- Corpus(VectorSource(x$BeforeConv))
corpus <- tm_map(corpus, tolower)
x$AfterConv <- data.frame(text = sapply(corpus, as.character))  # here, in RStudio, the data displays correctly
write.csv(x = x, file = "R2test.csv")                           # but in the exported file the data is garbled
My issue is that while in RStudio the conversion looks correct (see image below), when exporting to either CSV or XLSX that last column (AfterConv) contains totally messy data (see below). What did I do wrong, and how can I fix it? Ideally, I'd like to see the proper code to convert the last column correctly and export it, along with the two existing columns, to Excel or CSV.
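A hedged sketch of one likely fix (this is my reading of the code above, not a confirmed answer): the messy export typically comes from assigning a whole data frame into x$AfterConv, which nests a data frame inside a single column; assigning a plain character vector instead exports cleanly.

library(readxl)
library(tm)

x <- read_excel(file.choose())
corpus <- Corpus(VectorSource(x$BeforeConv))
corpus <- tm_map(corpus, content_transformer(tolower))  # wrapping tolower keeps the corpus structure intact

x$AfterConv <- sapply(corpus, as.character)   # plain character vector, one value per row

write.csv(x, file = "R2test.csv", row.names = FALSE, fileEncoding = "UTF-8")

The fileEncoding argument is optional; it just makes the CSV safer to reopen in Excel if the text contains non-ASCII characters.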
I am using openxlsx's read.xlsx to import a data frame with a mixed-type column. The desired result is to import all values as strings, exactly as they appear in Excel. However, some decimals come in as very long floats.
Sample data is simply an Excel file with a column containing the following rows:
abc123,
556.1,
556.12,
556.123,
556.1234,
556.12345
require(openxlsx)
df <- read.xlsx('testnumbers.xlsx')
Using the above R code to read the file results in df containing these string
values:
abc123,
556.1,
556.12,
556.12300000000005,
556.12339999999995,
556.12345000000005
The Excel file provided in production has the column formatted as "General". If I format the column as Text, there is no change unless I explicitly double-click each cell in Excel and hit Enter; in that case, the number is correctly displayed as a string. Unfortunately, clicking each cell isn't an option in the production environment. Any solution (Excel, R, or otherwise) is appreciated.
Edit: I've read through the question "Why Are Floating Point Numbers Inaccurate?" and believe I understand the math behind what's going on. At this point, I suppose I'm looking for a workaround: how can I get a float from Excel into an R data frame as text, without changing the representation?
I was able to get the correct formats into a data frame using pandas in Python:
import pandas as pd
test = pd.read_excel('testnumbers.xlsx', dtype = str)
This will suffice as a workaround, but I'd like to see a solution built in R.
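If a Python dependency is acceptable but you want to stay inside an R session, one hedged option (my own suggestion, not tested against your file) is to drive the same pandas call through reticulate:

library(reticulate)

pd       <- import("pandas")
builtins <- import_builtins()

# dtype = str forces every cell to be read as text, as in the pandas example above
test <- pd$read_excel("testnumbers.xlsx", dtype = builtins$str)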
Here is a workaround in R using openxlsx that I used to solve a similar issue. I think it will solve your question, or at least allow you to format cells as text in the Excel files programmatically.
I will use it to reformat specific cells in a large number of files (in my case, converting from "General" to "Scientific", as an example of how you might adapt this for another format).
This uses functions in the openxlsx package that you reference in the OP.
First, load the xlsx file as a workbook (stored in memory, which preserves all the xlsx formatting etc.; this is slightly different from the method shown in the question, which pulls in only the data):
library(openxlsx)

testnumbers <- loadWorkbook(here::here("test_data/testnumbers.xlsx"))
Then create a "style" that formats the numbers as text, and apply it to the relevant cells of the worksheet in memory:
numbersAsText <- createStyle(numFmt = "TEXT")
addStyle(testnumbers, sheet = "Sheet1", style = numbersAsText, cols = 1, rows = 1:10)
Finally, save the workbook back out to a file:
saveWorkbook(testnumbers,
             file = here::here("test_data/testnumbers_formatted.xlsx"),
             overwrite = TRUE)
When you open the Excel file, the numbers will be stored as "text".
I am attempting to read data from the National Health Interview Survey in R: http://www.cdc.gov/nchs/nhis/nhis_2011_data_release.htm . The data is Sample Adult. The SAScii library actually has a function read.SAScii whose documentation has an example for the same data set I would like to use. The issue is it "doesn't work":
library(SAScii)

NHIS.11.samadult.SAS.read.in.instructions <-
"ftp://ftp.cdc.gov/pub/Health_Statistics/NCHS/Program_Code/NHIS/2011/SAMADULT.sas"
NHIS.11.samadult.file.location <-
"ftp://ftp.cdc.gov/pub/Health_Statistics/NCHS/Datasets/NHIS/2011/samadult.zip"
#store the NHIS file as an R data frame!
NHIS.11.samadult.df <-
read.SAScii (
NHIS.11.samadult.file.location ,
NHIS.11.samadult.SAS.read.in.instructions ,
zipped = TRUE )
#or store the NHIS SAS import instructions for use in a
#read.fwf function call outside of the read.SAScii function
NHIS.11.samadult.sas <- parse.SAScii( NHIS.11.samadult.SAS.read.in.instructions )
#save the data frame now for instantaneous loading later
save( NHIS.11.samadult.df , file = "NHIS.11.samadult.data.rda" )
However, when running it, I get the error: Error in toupper(SASinput) : invalid multibyte string 533.
Others on Stack Overflow with a similar error, but for functions such as read.delim and read.csv, have recommended trying an argument like fileEncoding = "latin1". The problem is that read.SAScii has no such fileEncoding parameter.
See:
R: invalid multibyte string and Invalid multibyte string in read.csv
Just in case anyone has a similar problem: the issue, and the solution for me, was to run options( encoding = "windows-1252" ) right before running the above read.SAScii code, since the ASCII file is meant for use in SAS, and therefore on Windows, while I am using Linux.
The author of the SAScii library actually has another GitHub repository, asdfree, with working code for downloading CDC-NHIS datasets for all available years, as well as many other datasets from various surveys such as the American Housing Survey, FDA Drug Surveys, and many more.
The following link points to the author's solution to the issue in this question; from there, you can easily find a link to the asdfree repository: https://github.com/ajdamico/SAScii/issues/3 .
As far as this dataset goes, the code in https://github.com/ajdamico/asdfree/blob/master/National%20Health%20Interview%20Survey/download%20all%20microdata.R#L8-L13 does the trick; however, it doesn't encode the columns as factors or numeric properly. The good news is that for any given dataset in an NHIS year there are only about ten to twenty numeric columns, so encoding these as numeric one by one is not so painful, and encoding the rest of the columns as factors requires only a loop over the non-numeric columns (a rough sketch follows).
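The sketch below is mine (the vector of numeric column names is a placeholder; substitute the columns that are actually numeric in your NHIS year, and the data frame name from the earlier code):

numeric_cols <- c("AGE_P", "WTFA_SA")   # hypothetical examples

for (col in names(NHIS.11.samadult.df)) {
  if (col %in% numeric_cols) {
    NHIS.11.samadult.df[[col]] <- as.numeric(as.character(NHIS.11.samadult.df[[col]]))
  } else {
    NHIS.11.samadult.df[[col]] <- as.factor(NHIS.11.samadult.df[[col]])
  }
}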
The easiest solution for me, since I only needed the Sample Adult dataset for 2011 and was able to get my hands on a machine with SAS installed, was to run the SAS program included at http://www.cdc.gov/nchs/nhis/nhis_2011_data_release.htm to encode the columns as necessary. Finally, I used proc export to export the SAS dataset to a CSV file, which I then opened in R easily, with no edits needed to the data except for dealing with missing values.
In case you want to work with NHIS datasets besides Sample Adult, it is worth noting that when I ran the available SAS program for the 2010 "Sample Adult Cancer" file (http://www.cdc.gov/nchs/nhis/nhis_2010_data_release.htm) and exported the data to a CSV, the resulting file had fewer column names than actual columns when I tried to read it into R. Skipping the first line resolves this, but you lose the descriptive column names. You can, however, import this same data easily, without encoding, using the R code in the asdfree repository. Please read the documentation there for more info.