Retaining numerical data from csv file - r

I am trying to import a csv dataset with the number of benefit recipients per month and district. The table has 43 variables (months) and 88 observations (districts).
Unfortunately, when I import the dataset with the following code:
D=read.csv2(file="D.csv", header=TRUE, sep=";", dec=".")
all my numbers get converted to characters.
I tried the as.is=T argument, and to use read.delim, as suggested by Sam in this post: Imported a csv-dataset to R but the values becomes factors
but it did not help.
I also tried deleting the first two columns in the original csv file to get rid of the district names (the only genuinely non-numeric column), but I still get characters in the imported data frame. How can I keep these columns numeric?
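For what it's worth, a minimal sketch of the likely cause, using inline stand-in data since D.csv itself is not shown: read.csv2() already defaults to sep = ";" and dec = ",", so overriding dec = "." on a file that uses decimal commas turns every numeric column into character. Reading with the defaults keeps them numeric.

```r
# Stand-in for D.csv: semicolon-separated with decimal commas (an assumption)
txt <- "district;jan;feb\nA;1,5;2,0\nB;3,25;4,1"

# dec = "," is read.csv2()'s default, so the month columns stay numeric
D <- read.csv2(text = txt, header = TRUE)
sapply(D, class)   # district: character, jan/feb: numeric
```

If some cells contain stray text (e.g. "n/a"), even one such value silently turns the whole column into character, so it is worth inspecting a suspect column for values that fail to convert.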

Related

Importing multiple JSON files in R into a single data frame

Hi,
I want to import JSON files from a folder into an R data frame (as a single matrix). I have about 40000 JSON files, each with one observation and a varying set of variables.
I tried the following code:
library(rjson)
jsonresults_all <- list.files("mydata", pattern="*.json", full.names=TRUE)
myJSON <- lapply(jsonresults_all, function(x) fromJSON(file=x))
myJSONmat <- as.data.frame(myJSON)
I want my data frame to have 40000 observations (rows) and some 175 variables (columns), with missing variable values set to NA.
But I get a single row containing each observation appended to the right.
Many thanks for your suggestions.
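One way to get that shape, sketched with in-memory stand-ins for the fromJSON() results: as.data.frame(myJSON) binds all observations column-wise into one wide row, whereas row-binding each one-observation list after padding it to the union of variable names gives one row per file, with NA where a variable is absent.

```r
# Two stand-ins for single-observation fromJSON() results with different variables
obs <- list(list(a = 1, b = 2), list(a = 3, c = 4))

vars <- unique(unlist(lapply(obs, names)))   # union of all variable names
rows <- lapply(obs, function(x) {
  x[setdiff(vars, names(x))] <- NA           # pad missing variables with NA
  as.data.frame(x[vars])
})
myJSONmat <- do.call(rbind, rows)            # one row per observation
```

If using a package is acceptable, dplyr::bind_rows(lapply(...)) does the same NA-padding in one call.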

Data import from Excel to R (all columns are class character)

I'm new to R and really need some help with an assignment I have for school.
I've created an xls file containing returns for companies as decimals, i.e. 0.023 (a 2.3% return).
The data is in 3 columns with some negative values, and the title for each column is in the first row. No row names are present, just 130 observations of returns with the company names (column names) at the top. All the cells are formatted as General.
I converted the xls file to csv on my mac so the file type became CSV-UTF-8 (comma delimited).
When I try to create a dataset in R, I import the csv using the read.table command:
read.table("filename.csv", header = TRUE, sep = ";", row.names = NULL)
The dataset looks good, all the individual numbers in the right place, but when I try
sapply(dataset, class)
all columns are character. I've tried as.numeric and it says 'list' object cannot be coerced to type 'double'.
The issue comes from the fact that you imported a dataset that uses commas as decimal marks, and R cannot interpret those values as numeric (it requires a dot as the decimal separator).
Two ways to avoid this:
You import as you did and then convert your data frame:
dataset <- apply(apply(dataset, 2, gsub, pattern = ",", replacement = "."), 2, as.numeric)
Or you import the dataset directly, interpreting commas as the decimal separator, with read.csv2 (base R, so no extra package is needed):
read.csv2("filename.csv", fill = TRUE, header = TRUE)
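Both options can be sketched on inline data (the actual file is not shown, so the two columns here are placeholders):

```r
txt <- "x;y\n0,023;-0,015\n0,100;0,042"   # decimal commas, as in the exported csv

# Option 1: import as character, then swap "," for "." and convert
dataset <- read.table(text = txt, header = TRUE, sep = ";")
dataset <- apply(dataset, 2, function(col) as.numeric(gsub(",", ".", col)))

# Option 2: read.csv2() uses ";" and the decimal comma by default
dataset2 <- read.csv2(text = txt, header = TRUE)
```

Note that option 1 returns a numeric matrix (apply() drops the data frame class); wrap the result in as.data.frame() if a data frame is needed.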

Importing xlsx into R, header contains dates that are converted

I imported data into R, but the column headers in the xlsx file contain Date values; see a sample here:
GrowthValue 15-May 15-Jun 15-Jul 15-Aug 15-Sep 15-Oct
So in the table header of the spreadsheet, 15-May gets translated to the variable name X42505 in R.
I could not find anything in my searches. How do you preserve the Date in the header?
Excel stores dates as serial day numbers (42505 corresponds to 15 May 2016), and R doesn't allow syntactic column names to start with a number, so it adds X as a prefix.
You should avoid column names that start with numbers, but if that's what you want, here is a solution (source):
read.table(file, check.names=FALSE)
If you want to reference these columns, quote them with backticks:
df$`15-May`
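A small self-contained illustration, with inline data standing in for the imported sheet:

```r
txt <- "GrowthValue,15-May,15-Jun\n1.2,3.4,5.6"

df <- read.csv(text = txt, check.names = FALSE)   # keep "15-May" verbatim
names(df)      # "GrowthValue" "15-May" "15-Jun"
df$`15-May`    # backticks let you use the non-syntactic name
```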

In R, how can I read a .txt file and all the individual values in the .txt file separately?

I am currently trying to read a .txt file in R. The file contains 561 values per row: the values within a row are separated by spaces, and the rows are separated by tabs. How can I read in all the rows of the .txt file while also reading each of the 561 values in a row individually?
My code currently is
read.table("X_test_copy.txt", sep = "\t")
The problem with this code is that it reads all 561 values in a row as one single value, instead of as separate values.
Try this:
lines <- scan(file="path/to/file/file.ext", what="", sep="\t") # input as character
my.df <- read.table(text=lines)
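Demonstrating the two-step read on inline data (the path above is a placeholder): scan() splits the input on tabs into one character string per row, and read.table() then splits each string on whitespace into columns.

```r
txt <- "1 2 3\t4 5 6"                       # two rows separated by a tab
lines <- scan(text = txt, what = "", sep = "\t")
my.df <- read.table(text = lines)           # default sep splits on whitespace
dim(my.df)                                  # 2 rows, 3 columns
```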

data frame into matrix in R

I need to import data from a csv file and later do computations with it as with an ordinary matrix in R. The data in the csv file contains only numbers, except for variable names in the header. I used the following commands:
XX <- read.table("C:/Users/.../myfile.csv", header = TRUE)
and got something resembling a matrix, with numbers separated by commas. Then:
X<- as.matrix(sapply(XX, as.numeric))
which gave me just a column vector with strange numbers. What am I doing wrong? Thanks for the help!
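A likely explanation, sketched on inline data since the real file is not shown: read.table() defaults to whitespace as the separator, so each comma-separated row is read as a single value, and coercing that gives NA or, for factors, the meaningless integer codes. Declaring the comma separator, e.g. with read.csv(), fixes both symptoms.

```r
txt <- "a,b\n1,2\n3,4"                      # stand-in for myfile.csv

XX <- read.csv(text = txt, header = TRUE)   # sep = "," by default
X <- as.matrix(XX)                          # all-numeric columns give a numeric matrix
```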
