R - dbWriteTable makes capital letters in column names

I am trying to create a table (in Snowflake db) with exactly the same column names as I keep in the R data.frame object:
'data.frame': 1 obs. of 26 variables:
$ Ship_To : chr "0002061948"
$ Del_Coll_Indicator : chr "D"
$ Currency : chr "GBP"
$ Total_Volume : num 0
$ Total_Quantity : num 0
...
There is no problem with the table creation:
dbWriteTable(con = my_db$con, name = "test5", value = df)
but all column names in the database are converted to upper case:
'data.frame': 1 obs. of 26 variables:
$ SHIP_TO : chr "0002061948"
$ DEL_COLL_INDICATOR : chr "D"
$ CURRENCY : chr "GBP"
...
Is there any way to keep the original column names from the R data frame in the table?

As covered by Snowflake's SQL reference docs, when identifiers (such as column names) are unquoted at creation, Snowflake upper-cases them and treats them as case-insensitive. Quoted identifiers are kept as-is and treated as case-sensitive.
Alter the data frame column names (colnames(df)) to use a quoted-identifier format via the dbQuoteIdentifier(my_db$con, each_column_name) DBI function. This should preserve the casing.
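A minimal sketch of that suggestion (untested against Snowflake; it assumes the connection and data frame are my_db$con and df, as in the question) quotes every column name before writing. Check the resulting table afterwards in case the driver adds a second layer of quoting:
library(DBI)
# Wrap each column name in double quotes so Snowflake keeps it case-sensitive
colnames(df) <- vapply(
  colnames(df),
  function(nm) as.character(dbQuoteIdentifier(my_db$con, nm)),
  character(1)
)
dbWriteTable(my_db$con, "test5", df)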

Related

CHAR column not recognizing text values in R data.table

I have a data.table with an 'id' column that should be treated as character (CHAR), but it only matches numeric values.
> str(relatorio)
Classes ‘data.table’ and 'data.frame': 98010 obs. of 3 variables
$ id : chr NA "9074214401" "19136560472" "55171117420" ...
$ SITUACAO: chr "ATIVA" "ATIVA" "ATIVA" "ATIVA" ...
$ curso : chr "Não" "Não" "Não" "Não" ...
> relatorio[id == "08789441419"]
Empty data.table (0 rows and 3 cols): id,SITUACAO,curso
> relatorio[id == 08789441419]
id SITUACAO curso
1: 8789441419 ATIVA Não
I've tried converting the column using as.character(), but nothing works. I have tried reading the table from .csv and .xlsx but still get the same result. Any thoughts on why the 'id' column does not recognize the string value "08789441419"?
Thanks!
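The two results above already hint at what happened: the stored value has no leading zero, so the quoted comparison finds nothing, while the unquoted literal is coerced to the zero-less string and matches. A small sketch with hypothetical values, including one way to pad the ids back under the assumption that every id should have 11 digits:
library(data.table)
# Hypothetical stand-in for 'relatorio'
relatorio <- data.table(id = c(NA, "9074214401", "19136560472", "8789441419"),
                        SITUACAO = "ATIVA",
                        curso = "Não")
# 08789441419 is parsed as the number 8789441419 and then coerced to the
# string "8789441419" for the comparison, which is what is actually stored,
# so the leading zero was lost before or while the file was read.
relatorio[id == 08789441419]
# Restore leading zeros, assuming every id should have 11 digits:
relatorio[!is.na(id), id := paste0(strrep("0", pmax(0, 11 - nchar(id))), id)]
relatorio[id == "08789441419"]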

Remove the extra information (attr) after reading an SPSS file using read_sav

I used read_sav() to read an SPSS file into R.
How do I remove the extra information (attr)?
I don't know how to create a reprex for this question, but I have a sample below. I wish to remove the attributes from the column PersonID and convert it into a normal data frame/tibble.
Thanks
'data.frame': 543 obs. of 1 variable:
$ PersonID : num 1 2 3 4 5 6 7 8 9 10 ...
..- attr(*, "label")= chr "Person identifier"
..- attr(*, "format.spss")= chr "F8.0"
To remove all the attributes of the column you can use:
attributes(data$PersonID) <- NULL
To remove only specific ones you can do:
attr(data$PersonID, 'format.spss') <- NULL
To remove all attributes from all the columns:
data[] <- lapply(data, function(x) {attributes(x) <- NULL; x})
We can also use zap_labels and zap_formats from haven.
library(haven)
data <- zap_formats(zap_labels(data))
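If the attr(*, "label") shown above survives that, haven also has zap_label() (singular) for variable labels, while zap_labels() drops value labels; chaining all three should clear everything in the str() output:
library(haven)
data <- zap_formats(zap_label(zap_labels(data)))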

Can't write data frame to database

I can't really create a code example because I'm not quite sure what the problem is, and my actual problem is rather involved. That said, it seems like kind of a generic problem that maybe somebody's seen before.
Basically I'm constructing 3 different data frames and rbinding them together, which all goes smoothly as expected, but when I try to write that merged frame back to the DB I get this error:
Error in .External2(C_writetable, x, file, nrow(x), p, rnames, sep, eol, :
unimplemented type 'list' in 'EncodeElement'
I've tried manually coercing them using as.data.frame() before and after the rbinds, and the returned object (the same one that fails to write with the above error message) exists in the environment as class data.frame, so why does dbWriteTable not seem to have got the memo?
Sorry, I'm connecting to a MySQL DB using RMySQL. The problem, I think, as I look a little closer and try to explain myself, is that the columns of my data frame are themselves lists (of the same length), which sort of makes sense of the error. I'd think (or like to think, anyway) that a call to as.data.frame() would take care of that, but I guess not?
Since it's long, a portion of my str() looks like:
.. [list output truncated]
$ stcong :List of 29809
..$ : int 3
..$ : int 8
..$ : int 4
..$ : int 2
I guess I'm wondering if there's an easy way to force this coercion?
Hard to say for sure, since you provided so little concrete information, but this would be one way to convert a list column to an atomic vector column:
> d <- data.frame(x = 1:5)
> d$y <- as.list(letters[1:5])
> str(d)
'data.frame': 5 obs. of 2 variables:
$ x: int 1 2 3 4 5
$ y:List of 5
..$ : chr "a"
..$ : chr "b"
..$ : chr "c"
..$ : chr "d"
..$ : chr "e"
> d$y <- unlist(d$y)
> str(d)
'data.frame': 5 obs. of 2 variables:
$ x: int 1 2 3 4 5
$ y: chr "a" "b" "c" "d" ...
This assumes that each element of your list column is a length-one vector. If any aren't, things will be more complicated, and you'd likely need to rethink your data structure anyhow.
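For a frame with many such columns (like the stcong list column in the str() above), one option is to flatten every list column whose elements are all length one in a single pass; df_merged below is a hypothetical name for the rbind-ed result:
# Flatten list columns whose elements are all length-one vectors;
# leave anything else untouched for manual inspection.
flatten_ok <- function(col) is.list(col) && all(lengths(col) == 1L)
df_merged[] <- lapply(df_merged, function(col) if (flatten_ok(col)) unlist(col) else col)
str(df_merged)  # the flattened columns are now atomic vectors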

Replace row in data.frame

I have a dataframe which looks like that:
'data.frame': 3036 obs. of 751 variables:
$ X : chr "01.01.2002" "02.01.2002" "03.01.2002" "04.01.2002" ...
$ A: chr "na" "na" "na" "na" ...
$ B: chr "na" "1,827437365" "0,833922973" "-0,838923572" ...
$ C: chr "na" "1,825300613" "0,813299479" "-0,866639008" ...
$ D: chr "na" "1,820482187" "0,821374034" "-0,875963104" ...
...
I have converted the X column into Date format:
dates <- as.Date(dataFrame$X, '%d.%m.%Y')
Now I want to replace that column. The thing is, I cannot simply rebuild the data frame, because after D there are hundreds more columns...
What would be an easy way to do that?
I think what you want is simply:
dataFrame$X <- dates
if what you want to do is replace column X with dates. If you want to remove column X entirely, simply do the following:
dataFrame$X <- NULL
(edited with more concise removal method provided by user #shujaa)
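A tiny self-contained illustration of both operations, on toy data rather than the OP's frame:
df <- data.frame(X = c("01.01.2002", "02.01.2002"),
                 A = c("na", "na"),
                 stringsAsFactors = FALSE)
dates <- as.Date(df$X, '%d.%m.%Y')
df$X <- dates   # replace the character column with the Date vector
str(df)
df$X <- NULL    # or drop the column entirely
str(df)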

Bad interpretation of #N/A using `fread`

I am using the data.table fread() function to read some data which have missing values; the data were generated in Excel, so the missing-value string is "#N/A". However, when I use the na.strings argument, the final str() of the read data is still character. To replicate this, here are the code and data.
Data:
Date,a,b,c,d,e,f,g
1/1/03,#N/A,0.384650146,0.992190069,0.203057232,0.636296656,0.271766148,0.347567706
1/2/03,#N/A,0.461486974,0.500702057,0.234400718,0.072789936,0.060900352,0.876749487
1/3/03,#N/A,0.573541006,0.478062582,0.840918789,0.061495666,0.64301024,0.939575302
1/4/03,#N/A,#N/A,#N/A,#N/A,#N/A,#N/A,#N/A
1/5/03,#N/A,#N/A,#N/A,#N/A,#N/A,#N/A,#N/A
1/6/03,#N/A,0.66678429,0.897482818,0.569609033,0.524295691,0.132941158,0.194114347
1/7/03,#N/A,0.576835985,0.982816576,0.605408973,0.093177815,0.902145012,0.291035649
1/8/03,#N/A,0.100952961,0.205491093,0.376410642,0.775917986,0.882827749,0.560508499
1/9/03,#N/A,0.350174456,0.290225065,0.428637309,0.022947911,0.7422805,0.354776101
1/10/03,#N/A,0.834345466,0.935128099,0.163158666,0.301310627,0.273928596,0.537167776
1/11/03,#N/A,#N/A,#N/A,#N/A,#N/A,#N/A,#N/A
1/12/03,#N/A,#N/A,#N/A,#N/A,#N/A,#N/A,#N/A
1/13/03,#N/A,0.325914633,0.68192633,0.320222677,0.249631582,0.605508964,0.739263677
1/14/03,#N/A,0.715104989,0.639040211,0.004186366,0.351412982,0.243570606,0.098312443
1/15/03,#N/A,0.750380716,0.264929325,0.782035411,0.963814327,0.93646428,0.453694758
1/16/03,#N/A,0.282389354,0.762102103,0.515151803,0.194083842,0.102386764,0.569730516
1/17/03,#N/A,0.367802161,0.906878948,0.848538256,0.538705673,0.707436236,0.186222899
1/18/03,#N/A,#N/A,#N/A,#N/A,#N/A,#N/A,#N/A
1/19/03,#N/A,#N/A,#N/A,#N/A,#N/A,#N/A,#N/A
1/20/03,#N/A,0.79933188,0.214688799,0.37011313,0.189503843,0.294051763,0.503147404
1/21/03,#N/A,0.620066341,0.329949446,0.123685075,0.69027192,0.060178071,0.599825005
(data saved in temp.csv)
Code:
library(data.table)
a <- fread("temp.csv", na.strings="#N/A")
gives (I have a larger dataset, so ignore the number of observations):
Classes ‘data.table’ and 'data.frame': 144 obs. of 8 variables:
$ Date: chr "1/1/03" "1/2/03" "1/3/03" "1/4/03" ...
$ a : chr NA NA NA NA ...
$ b : chr "0.384650146" "0.461486974" "0.573541006" NA ...
$ c : chr "0.992190069" "0.500702057" "0.478062582" NA ...
$ d : chr "0.203057232" "0.234400718" "0.840918789" NA ...
$ e : chr "0.636296656" "0.072789936" "0.061495666" NA ...
$ f : chr "0.271766148" "0.060900352" "0.64301024" NA ...
$ g : chr "0.347567706" "0.876749487" "0.939575302" NA ...
- attr(*, ".internal.selfref")=<externalptr>
This code works fine
a <- read.csv("temp.csv", header=TRUE, na.strings="#N/A")
Is it a bug? Is there some smart workaround?
The documentation from ?fread for na.strings reads:
na.strings A character vector of strings to convert to NA_character_. By default for columns read as type character ",," is read as a blank string ("") and ",NA," is read as NA_character_. Typical alternatives might be na.strings=NULL or perhaps na.strings = c("NA","N/A","").
You should convert them to numeric yourself afterwards, I suppose. At least this is what I understand from the documentation.
Something like this?
cbind(a[, 1], a[, lapply(.SD[, -1], as.numeric)])
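Two other options, assuming the column layout from the sample above: convert the non-Date columns in place with :=, or declare the types when reading:
# In-place conversion of every column except Date
num_cols <- setdiff(names(a), "Date")
a[, (num_cols) := lapply(.SD, as.numeric), .SDcols = num_cols]
# Or declare the types at read time (columns 2 to 8 are numeric here)
a <- fread("temp.csv", na.strings = "#N/A", colClasses = list(numeric = 2:8))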
