Remove comma which is a thousands separator in R

I need to import a bunch of .csv files into R. I do this using the following code:
Dataset <- read.csv(paste0("./CSV/State_level/",file,".csv"),header = F,sep = ";",dec = "," , stringsAsFactors = FALSE)
The input is a .csv file that uses "," as the decimal separator. Unfortunately there are quite a few entries like this: 20,012,054.
This should really be 20012,054; as it stands, it leads either to NAs or, more often, to the whole data frame being imported as character rather than numeric, which is what I want.
How do I get rid of the first "," (reading from left to right), and only when the number has more than 3 figures in front of the decimal comma?
Here is a sample of how the data looks in the .csv file. A data.frame might look like this:
df<-data.frame(a=c(0.5,0.84,12.25,"20,125,25"), b=c("1,111,054",0.57,105.25,0.15))
I used "." as decimal separator in this case to make it a number, which in the .csv is a ",", but this is not the issue for numbers in the format: 123,45.
Thank you for your ideas & help!

We can use sub to get rid of the first ,
df[] <- lapply(df, function(x) sub(",(?=.*,)", "", x, perl = TRUE))
Just to show it would leave the , if there is only a single , in the value:
sub(",(?=.*,)", "", c("0,5", "20,125,25"), perl = TRUE)
#[1] "0,5" "20125,25"

Related

NA introduced by coercion

I have a Notepad text file inflation.txt that looks something like this:
1950-1 0.0084490544865279
1950-2 −0.0050487986543660
1950-3 0.0038461526886055
1950-4 0.0214293914558992
1951-1 0.0232839389540449
1951-2 0.0299121323429455
1951-3 0.0379293285389640
1951-4 0.0212773984472849
From a previous stackoverflow post, I learned how to import this file into R:
data <- read.table("inflation.txt", sep = "" , header = F ,
na.strings ="", stringsAsFactors= F, encoding = "UTF-8")
However, this code reads everything in as character. When I try to convert the second column to numeric, all negative values are replaced with NA:
b=as.numeric(data$V2)
Warning message:
In base::as.numeric(x) : NAs introduced by coercion
> head(b)
[1] 0.008449054 NA 0.003846153 0.021429391 0.023283939 0.029912132
Can someone please show me what I am doing wrong? Is it possible to save the inflation.txt file as a data.frame?
I would read the file using space as a separator, then split out separate columns for the year and the quarter in your R script (note that this alone still leaves the values as character, because of the minus-sign issue covered in the next answer):
data <- read.table("inflation.txt", sep = " ", header = FALSE,
                   na.strings = "", stringsAsFactors = FALSE, encoding = "UTF-8")
names(data) <- c("ym", "vals")
data$year    <- as.numeric(sub("-.*$", "", data$ym))
data$quarter <- as.numeric(sub("^\\d+-", "", data$ym))
data <- data[, c("year", "quarter", "vals")]
The issue is that the "−" in your data is not the ASCII hyphen-minus that R expects but the Unicode minus sign (U+2212), hence the data is being read as character.
You have two options.
Open the file in any text editor and find-and-replace every "−" with the ASCII minus sign "-"; then read.table works directly.
data <- read.table("inflation.txt")
If you can't change the data in the original file then replace them with sub after reading the data into R.
data$V2 <- as.numeric(sub('−', '-', data$V2, fixed = TRUE))
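Putting the two answers together (read the file, repair the minus sign, convert, then split out year and quarter) might look like the sketch below, where "\u2212" is R's escape for the Unicode minus sign U+2212, so you don't have to paste the raw character into a script:
data <- read.table("inflation.txt", stringsAsFactors = FALSE)
data$V2 <- as.numeric(sub("\u2212", "-", data$V2, fixed = TRUE))  # U+2212 -> ASCII '-'
data$year    <- as.numeric(sub("-.*$", "", data$V1))
data$quarter <- as.numeric(sub("^\\d+-", "", data$V1))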

Issues importing csv data into R where the data contains additional commas

I have a very large data set that for illustrative purposes looks something like the following.
Cust_ID , Sales_Assistant , Store
123 , Mary, Worthington, 22
456 , Jack, Charles , 42
The real data has many more columns and millions of rows. I'm using the following code to import it into R but it is falling over because one or more of the columns has a comma in the data (see Sales_Assistant above).
df <- read.csv("C:/dataextract.csv", header = TRUE , as.is = TRUE , sep = "," , na.strings = "NA" , quote = "" , fill = TRUE , dec = "." , allowEscapes = FALSE , row.names=NULL)
Adding row.names=NULL imported all the data but it split the Sales_Assistant column over two columns and threw all the other data out of alignment. If I run the code without this I get an error...
Error in read.table(file = file, header = header, sep = sep, quote = quote, : duplicate 'row.names' are not allowed
...and the data won't load.
Can you think of a way around this that doesn't involve tackling the data at source, or opening it in a text editor? Is there a solution in R?
First and foremost, it is a csv file: "Mary, Worthington" is meant to correspond to two columns. If you have commas in your values, consider saving the data as tsv (tab-separated values) instead.
However, if your data has the same number of commas in every row, with good alignment in some sense, I would consider ignoring the first row (the column names, as you read the file) and reassigning proper column names afterwards.
For instance, in your case you can replace Sales_Assistant by
Sales_Assistant_First_Name, Sales_Assistant_Last_Name
which makes perfect sense. Then I could basically do
df <- df[-1, ]
colnames(df) <- c("Cust_ID" , "Sales_Assistant_First_Name" , "Sales_Assistant_Last_Name", "Store")
df <- read.csv("C:/dataextract.csv", skip = 1, header = FALSE)
df_cnames <- read.csv("C:/dataextract.csv", nrow = 1, header = FALSE)
df <- within(df, V2V3 <- paste(V2, V3, sep = ''))
df <- subset(df, select = (c("V1", "V2V3", "V4")))
colnames(df) <- df_cnames
It may need some modification depending on the actual source; an alternative that repairs the raw lines before parsing is sketched below.
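If you would rather fix the stray comma before read.csv ever sees it, a hedged alternative (assuming exactly one extra comma per row, always inside Sales_Assistant):
lines <- readLines("C:/dataextract.csv")
# merge the 2nd and 3rd fields of each data row: keep the 1st and 3rd commas,
# replace the 2nd with a space
fixed <- sub("^([^,]*),([^,]*),([^,]*),", "\\1,\\2 \\3,", lines[-1])
df <- read.csv(text = c(lines[1], fixed), header = TRUE)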

Formatting an XLSX file in R into a custom text blob

I want to read an xlsx file and convert the data in the file into one long text string. I want to format this string in an intelligent manner, such that each row is contained in parentheses "()" and the data stays comma-separated. So, for example, if the xlsx file looked like this:
one,two,three
x,x,x
y,y,y
z,z,z
after formatting, the string would look like
header(one,two,three)row(x,x,x)row(y,y,y)row(z,z,z)
How would you accomplish this task with R?
My first instinct was something like this, but I can't figure it out:
library(xlsx)
sheet1 <- read.xlsx("run_info.xlsx",1)
paste("(",sheet1[1,],")")
This works for me:
DF <- read.xlsx("run_info.xlsx", 1)
paste0("header(", paste(names(DF), collapse = ","), ")",
       paste(paste0("row(", apply(DF, 1, paste, collapse = ","), ")"),
             collapse = ""))
# [1] "header(one,two,three)row(x,x,x)row(y,y,y)row(z,z,z)"

Numeric variables converted to factors when reading a CSV file

I'm trying to read a .csv file into R where all the columns are numeric. However, they get converted to factor every time I import them.
Here's a sample of what my CSV looks like:
This is my code:
options(StringsAsFactors=F)
data<-read.csv("in.csv", dec = ",", sep = ";")
As you can see, I set dec to "," and sep to ";". Still, all the vectors that should be numeric are factors!
Can someone give me some advice? Thanks!
Your NA strings in the csv file, N/A, are interpreted as character, and then the whole column is converted to character. If you have stringsAsFactors = TRUE in options or in read.csv (the default before R 4.0), the column is further converted to factor. You can use the argument na.strings to tell read.csv which strings should be interpreted as NA.
A small example:
df <- read.csv(text = "x;y
N/A;2,2
3,3;4,4", dec = ",", sep = ";")
str(df)
df <- read.csv(text = "x;y
N/A;2,2
3,3;4,4", dec = ",", sep = ";", na.strings = "N/A")
str(df)
Update following comment
Although not apparent from the sample data provided, there is also a problem with instances of '$' concatenated to the numbers, e.g. '$3,3'. Such values will be interpreted as character, and then the dec = "," doesn't help us. We need to replace both the '$' and the ',' before the variable is converted to numeric.
df <- read.csv(text = "x;y;z
N/A;1,1;2,2$
$3,3;5,5;4,4", dec = ",", sep = ";", na.strings = "N/A")
df
str(df)
df[] <- lapply(df, function(x) {
  x2 <- gsub(pattern = "$", replacement = "", x = x, fixed = TRUE)   # strip the '$'
  x3 <- gsub(pattern = ",", replacement = ".", x = x2, fixed = TRUE) # decimal comma -> period
  as.numeric(x3)
})
df
str(df)
You could actually have gotten your original code to work: there's a tiny typo ('stringsAsFactors', not 'StringsAsFactors'). The options command won't complain about the wrong name, but it just won't work. When spelled correctly, the columns are read as character instead of factor, and you can then convert them to whatever format you want.
I just had this same issue and tried all the fixes on this and other duplicate posts; none really worked all that well. The way I fixed it was actually on the Excel side: if you highlight all the columns in your source file (in Excel), right-click, choose Format Cells, and select 'Number', it'll import perfectly fine (so long as you have no non-numeric characters below the header). An R-side equivalent is sketched below.
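For completeness, an R-side equivalent of that Excel reformatting; a sketch that assumes the affected columns came in as character and that the file uses ';' separators and decimal commas as in the question:
data <- read.csv("in.csv", dec = ",", sep = ";",
                 na.strings = "N/A", stringsAsFactors = FALSE)
data[] <- lapply(data, function(x)
  type.convert(as.character(x), na.strings = c("NA", "N/A"),
               dec = ",", as.is = TRUE))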

Import csv file with both tab and quotes as separators into R

I have a dataset in csv with separators as displayed below.
NO_CAND";"DS_CARGO";"CD_CARGO";"NR_CAND";"SG_UE";"NR_CNPJ";"NR_CNPJ_1";
CLODOALDO JOSÉ DE RAMOS";"Deputado Estadual";"7";"22111";"PB";"08126218000107";"Encargos financeiros e taxas bancárias";
I am using the function read.csv2 with options
mydataframe <- read.csv2("filename.csv",header = T, sep=";", quote="\\'", dec=",",
stringsAsFactors=F, check.names = F, fileEncoding="latin1")
The code reads in the data, but with all the quotes.
I have tried to delete the quotes using
mydataframe[,] <- apply(mydataframe[,], c(1,2), function(x) {
gsub("\\'", "", x)
})
but it doesn't work.
Any ideas on how I could import the data getting rid of these quotes?
Many thanks.
To delete the quotes, use lapply and gsub as follows.
mydataframe[] <- lapply(mydataframe, function(x) gsub("\"", "", x))
lapply iterates over all columns of the data frame and returns a list; by having mydataframe[] on the LHS of the assignment, you assign the results back into the data frame without losing its attributes (dimensions, names, etc). Also, you don't have any single quotes ' in your data, so searching for them won't achieve anything.
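If the double quotes are balanced in the real file (the snippet in the question may just be truncated), you may not need the cleanup at all: read.csv2 already defaults to quote = "\"", and the trouble comes from overriding it with quote = "\\'". A hedged sketch:
mydataframe <- read.csv2("filename.csv", header = TRUE, sep = ";", quote = "\"",
                         dec = ",", stringsAsFactors = FALSE,
                         check.names = FALSE, fileEncoding = "latin1")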
