Hi, I am trying to read a CSV file and I would like to get a character vector back.
The inside of the file looks like this:
head filetoread.csv
0610010K14Rik,0610011F06Rik,1110032F04Rik,1110034G24Rik,1500011B03Rik,1700019L03Rik,1700021K19Rik, blah,blah,...
In an R session:
c <- read.csv("filetoread.csv")
> c
[1] X0610010K14Rik X0610011F06Rik X1110032F04Rik
...
> str(c)
'data.frame': 0 obs. of 2840 variables:
$ X0610010K14Rik : logi
$ X0610011F06Rik : logi
$ X1110032F04Rik : logi
$ X1110034G24Rik : logi
...
but I want something like:
> c
[1] "X0610010K14Rik" "X0610011F06Rik" "X1110032F04Rik" ...
str(c)
chr [1:2840] "X0610010K14Rik" "X0610011F06Rik" "X1110032F04Rik"...
We can use scan:
scan("filetoread.csv", sep=',', what = "", quiet = TRUE)
#[1] "0610010K14Rik" "0610011F06Rik" "1110032F04Rik" "1110034G24Rik"
#[5] "1500011B03Rik" "1700019L03Rik" "1700021K19Rik" " blah" "blah"
I am trying to import the data from this API: https://api.ycombinator.com/companies/export.json?callback=true
and I am getting the following error:
Error in parse_con(txt, bigint_as_char) lexical error: invalid char in json text.
setupCompanies([{"name":"Parake
(right here) ------^
I thought the error was because of emoticons, so I downloaded the file as text and removed them manually. It didn't work.
Remove the ?callback=true from your URL, and it works without error:
aa <- jsonlite::fromJSON("https://api.ycombinator.com/companies/export.json")
str(aa)
# 'data.frame': 2055 obs. of 8 variables:
# $ name : chr "Parakey" "Dinesafe" "Pengram" "Demeanor.co" ...
# $ url : chr "http://parakey.com" "https://dinesafe.org" "http://pengramar.com" "https://demeanor.co" ...
# $ batch : chr "s2005" "s2018" "w2019" "s2018" ...
# $ vertical : chr NA "B2B" "Augmented Reality" "Media" ...
# $ description: chr "" "We crowdsource food poisoning reports and help detect and prevent outbreaks." "Pengram provides indoor navigation in augmented reality on your phone. " "Now part of thentwrk.com" ...
# $ dead : logi FALSE FALSE FALSE FALSE FALSE FALSE ...
# $ has_ff : logi NA FALSE FALSE FALSE FALSE FALSE ...
# $ all_ff : logi NA FALSE FALSE FALSE FALSE FALSE ...
I'm guessing that the "callback API" is setting up the return value so that it is effectively a function call (i.e., setupCompanies(...)), not just data.
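If you do need to keep the ?callback=true form for some reason, one workaround is to strip the JSONP wrapper yourself before parsing. A rough sketch, assuming the wrapper is exactly setupCompanies( ... ) as the error message suggests:
url <- "https://api.ycombinator.com/companies/export.json?callback=true"
txt <- paste(readLines(url, warn = FALSE), collapse = "")
json <- sub("^[^(]*\\(", "", txt)    # drop the leading "setupCompanies("
json <- sub("\\);?\\s*$", "", json)  # drop the trailing ")" or ");"
aa <- jsonlite::fromJSON(json)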
I have ASCII files with data separated by $ signs.
There are 23 columns in the data and the first row contains the column names, but there is an inconsistency in the line endings, which causes R to import the data improperly, shifting the values to the left with respect to their columns.
Header line:
ISR$CASE$I_F_COD$FOLL_SEQ$IMAGE$EVENT_DT$MFR_DT$FDA_DT$REPT_COD$MFR_NUM$MFR_SNDR$AGE$AGE_COD$GNDR_COD$E_SUB$WT$WT_COD$REPT_DT$OCCP_COD$DEATH_DT$TO_MFR$CONFID$REPORTER_COUNTRY
which does not end with a $ sign.
First row line:
7215577$8135839$I$$7215577-0$20101011$$20110104$DIR$$$67$YR$F$N$220$LBS$20110102$CN$$N$Y$UNITED STATES$
which does end with a $ sign.
My import command:
read.table(filename, header=TRUE, sep="$", comment.char="", quote="")
My guess is that the inconsistency in the line endings causes R to think that the records have one column more than the header, thus treating the first column as a row.names column, which is not correct. Adding the specification row.names=NULL does not fix the issue.
If I manually add a $ sign at the end of the header line the problem is solved, but this is infeasible since the issue occurs in hundreds of files. Is there a way to specify how to read the header line? Do I have any alternative?
Additional info: the headers change across different files, so I cannot set my own vector of column names
Create a dummy test file:
cat("ISR$CASE$I_F_COD$FOLL_SEQ$IMAGE$EVENT_DT$MFR_DT$FDA_DT$REPT_COD$MFR_NUM$MFR_SNDR$AGE$AGE_COD$GNDR_COD$E_SUB$WT$WT_COD$REPT_DT$OCCP_COD$DEATH_DT$TO_MFR$CONFID$REPORTER_COUNTRY\n7215577$8135839$I$$7215577-0$20101011$$20110104$DIR$$$67$YR$F$N$220$LBS$20110102$CN$$N$Y$UNITED STATES$",
file="deleteme.txt",
"\n")
Solution using gsub:
First read the file as text and then edit its content:
file_path <- "deleteme.txt"
fh <- file(file_path)
file_content <- readLines(fh)
close(fh)
Either add a $ at the end of the header row:
file_content[1] <- paste0(file_content[1], "$")
Or remove $ from the end of all rows:
file_content <- gsub("\\$$", "", file_content)
Then we write the fixed file back to disk:
cat(paste0(file_content, collapse="\n"), "\n", file=paste0("fixed_", file_path))
Now we can read the file:
df <- read.table(paste0("fixed_", file_path), header=TRUE, sep="$", comment.char="", quote="", stringsAsFactors=FALSE)
And get the desired structure:
str(df)
'data.frame': 1 obs. of 23 variables:
$ ISR : int 7215577
$ CASE : int 8135839
$ I_F_COD : chr "I"
$ FOLL_SEQ : logi NA
$ IMAGE : chr "7215577-0"
$ EVENT_DT : int 20101011
$ MFR_DT : logi NA
$ FDA_DT : int 20110104
$ REPT_COD : chr "DIR"
$ MFR_NUM : logi NA
$ MFR_SNDR : logi NA
$ AGE : int 67
$ AGE_COD : chr "YR"
$ GNDR_COD : logi FALSE
$ E_SUB : chr "N"
$ WT : int 220
$ WT_COD : chr "LBS"
$ REPT_DT : int 20110102
$ OCCP_COD : chr "CN"
$ DEATH_DT : logi NA
$ TO_MFR : chr "N"
$ CONFID : chr "Y"
$ REPORTER_COUNTRY: chr "UNITED STATES "
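Since the issue occurs in hundreds of files, the same fix can be wrapped in a loop. A sketch, where raw_dir is a placeholder for the directory holding your files; read.table's text= argument lets us skip writing the fixed files back to disk:
raw_dir <- "path/to/raw_files"   # placeholder
files <- list.files(raw_dir, full.names = TRUE)
all_data <- lapply(files, function(f) {
  lines <- readLines(f)
  lines <- gsub("\\$$", "", lines)  # drop the trailing $ from every line
  read.table(text = lines, header = TRUE, sep = "$",
             comment.char = "", quote = "", stringsAsFactors = FALSE)
})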
I have a data file that contains several character variables that consist only of numbers. They need to remain character variables, as some of them start with a 0 and the leading zeros are cut off when converting to integer/numeric. For some strange reason, when I use fwrite to save my data file as CSV and then open it again with fread, the character variables that consisted only of numbers are suddenly integer variables. How can I keep R from doing this?
> str(Dataset_Master)
Classes ‘data.table’ and 'data.frame': 12178669 obs. of 4 variables:
$ Date_of_goods_arrival_at_the_customer: int 20160527 20160527 20160527...
$ Sales_document : chr "0505399186" "0505435949"...
$ Warehouse : chr "8150" "8150" "8150" "8150" ...
$ Sold_to_country : chr "DE" "DE" "DE" "DE" ...
- attr(*, ".internal.selfref")=<externalptr>
> ##Save document
> fwrite(Dataset_Master, "Dataset_Master_3.csv")
> ##Load data
> Dataset_Master <- fread("Dataset_Master_3.csv")
|--------------------------------------------------|
|==================================================|
> str(Dataset_Master)
Classes ‘data.table’ and 'data.frame': 12178669 obs. of 4 variables:
$ Date_of_goods_arrival_at_the_customer: int 20160527 20160527 20160527...
$ Sales_document : int 505399186 505435949 505435949...
$ Warehouse : int 8150 8150 8150 8150 8150 8150...
$ Sold_to_country : chr "DE" "DE" "DE" "DE" ...
- attr(*, ".internal.selfref")=<externalptr>
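fread guesses column types from the values it sees, so columns containing only digits come back as integer. One way around this is to pin the types explicitly via colClasses; a sketch using the column names from the str output above:
library(data.table)
Dataset_Master <- fread("Dataset_Master_3.csv",
                        colClasses = c(Sales_document = "character",
                                       Warehouse      = "character"))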
I am trying to find a way to retrieve data from the Harvard Dataverse website through R. I am using the "dataverse" and "dvn" packages, among others. Many of the data files end with ".tab", although they are not formatted as normal tab-delimited text.
I have done this:
library(dataverse)
## 01. Using the dataverse server and making a search
Sys.setenv("DATAVERSE_SERVER" ="dataverse.harvard.edu")
## 02. Loading the dataset that I chose, by url
doi_url <- "https://doi.org/10.7910/DVN/ZTCWYQ"
my_dataset <- get_dataset(doi_url)
## 03. Grabbing the first file of the dataset
## which is named "001_AppendixC.tab"
my_files <- my_dataset$files$label
my_file <- get_file(my_files[1], doi_url)
AppendixC <- tempfile()
writeBin(my_file, AppendixC)
read.table(AppendixC)
> Error in scan(file = file, what = what, sep = sep, quote = quote, dec = dec, :
> line 1 did not have 2 elements
> In addition: Warning message:
> In read.table(AppendixC) :
> line 1 appears to contain embedded nulls
Any hint?
The problem is that dataverse::get_file() returns the file in a raw binary format. The easiest way to load it into memory is to write it to a tempfile with writeBin() and then read that file with the appropriate import/read function.
Here is a function that should automagically read it into memory:
# Uses rio, which automatically chooses the appropriate import/read
# function based on file type.
library(rio)
install_formats() # only needs to be run once after package installation
load_raw_file <- function(raw, type) {
match.arg(
arg = type,
choices = c(
"csv", "tab", "psc", "tsv", "sas7bdat",
"sav", "dta", "xpt", "por", "xls", "xlsx",
"R", "RData", "rda", "rds", "rec", "mtb",
"feather", "csv.gz", "fwf"
)
)
tmp <- tempfile(fileext = paste0(".", type))
writeBin(as.vector(raw), tmp)
out <- import(tmp)
unlink(tmp)
out
}
Let's try it out with your file, which is an Excel file.
library(dataverse)
raw <- get_file(
"001_AppendixC.tab",
"https://doi.org/10.7910/DVN/ZTCWYQ"
)
data <- load_raw_file(raw, "xlsx")
And look at the data:
str(data)
> 'data.frame': 132 obs. of 17 variables:
> $ Country : chr "Afghanistan" "Albania" "Algeria" "Angola" ...
> $ UN_9193 : chr "37.4" "7.7" "9.1" "65.400000000000006" ...
> $ UN_9901 : chr "46.1" "7.2" "10.7" "50" ...
> $ UN_0709 : chr "24.6" "9.6999999999999993" "7.5" "23.7" ...
> $ UN_1416 : chr "23" "4.9000000000000004" "4.5999999999999996" "14" ...
> $ stu90_94 : chr "51.3" "37.200000000000003" "22.9" "52.9" ...
> $ stu98_02 : chr "54.7" "39.200000000000003" "23.6" "47.1" ...
> $ stu06_10 : chr "51.3" "23.1" "13.2" "29.2" ...
> $ stu12_16 : chr "40.9" "17.899999999999999" "11.7" "37.6" ...
> $ wast90_94: chr "11.5" "9.4" "7.1" "7.9" ...
> $ wast98_02: chr "13.4" "12.2" "3.1" "8.6999999999999993" ...
> $ wast06_10: chr "8.9" "9.4" "4.0999999999999996" "8.1999999999999993" ...
> $ wast12_16: chr "9.5" "6.2" "4.0999999999999996" "4.9000000000000004" ...
> $ UM1992 : chr "16.8" "3.7" "4.5" "22.6" ...
> $ UM2000 : chr "13.7" "2.6" "4" "21.7" ...
> $ UM2008 : chr "11" "1.8" "2.9" "19.2" ...
> $ UM2015 : chr "9.1" "1.4" "2.6" "15.7" ...
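If you then want the measurement columns as actual numbers rather than character strings (everything except Country in the str output above), a quick follow-up sketch:
data[-1] <- lapply(data[-1], as.numeric)  # convert all but the first (Country) column
str(data)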