I'm writing to an SQLite db from R using the following command:
dbWriteTable(con, 'topics', as.data.frame(topics), row.names = NA, overwrite = FALSE, append = TRUE, field.types = NULL)
I get the following table in sqlite:
How can I rename the row_names attribute?
The df [as.data.frame(topics)] snippet is:
This is what the row.names argument to dbWriteTable is for: set it to a character value to rename the column, or set it to NULL to avoid writing it altogether. Explore the guessRowName() function in the DBI package for other options.
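For illustration, a minimal sketch of both options (the column name "topic" is a hypothetical replacement; con and topics are the objects from the question):

library(DBI)

# Write the row names under a custom column name instead of "row_names":
dbWriteTable(con, "topics", as.data.frame(topics),
             row.names = "topic", append = TRUE)

# Or drop the row-names column entirely:
dbWriteTable(con, "topics", as.data.frame(topics),
             row.names = FALSE, append = TRUE)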
Related
I use the DBI package to write values to my database tables. The database is PostgreSQL.
My data looks like this; some of my values have 0 decimal digits, some have 1:
I get this data from reading a csv using the xlsx library.
I use this code to write data to my table:
DBI::dbWriteTable(conn = con,
                  name = Id(schema = "schema", table = 'table'),
                  value = df,
                  append = TRUE)
But in the database I end up with this:
The column types of min_limit and max_limit in the database are numeric.
I tried to use:
DBI::dbWriteTable(conn = con,
                  name = Id(schema = "schema", table = 'table'),
                  value = format(df, digits = 2),
                  append = TRUE)
But this gives me error:
> Error while preparing parameters ERROR: column "row_names" of relation "table" does not exist
What do I need to do to write values rounded to 2 digits to the database table?
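One hedged suggestion: format() converts every column to character and gives the result character row names, which dbWriteTable (with its default row.names = NA) then tries to write as a row_names column, hence the error. Rounding the numeric columns with round() keeps them numeric and leaves the row names alone. A minimal sketch, assuming min_limit and max_limit are the columns to round:

# round() keeps the columns numeric, unlike format(), which returns
# character columns and character row names.
df$min_limit <- round(df$min_limit, 2)
df$max_limit <- round(df$max_limit, 2)

DBI::dbWriteTable(conn = con,
                  name = Id(schema = "schema", table = 'table'),
                  value = df,
                  append = TRUE)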
I'm just starting my journey with R, so I'm a complete newbie and I can't find anything that will help me solve this.
I have a csv table (random integers in each column) with 9 columns. I read 8 of them and want to append them to an SQL table with 8 fields (Col1 ... Col8, all ints). After uploading the csv into RStudio, it looks right and has only 8 columns:
The code I'm using is:
# Libraries
library(DBI)
library(odbc)
library(tidyverse)

# CSV Files
df = head(
  read_delim(
    "C:/Data/test.txt",
    " ",
    trim_ws = TRUE,
    skip = 1,
    skip_empty_rows = TRUE,
    col_types = cols('X7' = col_skip())
  ),
  -1
)

# Add Column Headers
col_headings <- c('Col1', 'Col2', 'Col3', 'Col4', 'Col5', 'Col6', 'Col7', 'Col8')
names(df) <- col_headings

# Connect to SQL Server
con <- dbConnect(odbc(), "SQL", timeout = 10)

# Append data
dbAppendTable(conn = con,
              schema = "tmp",
              name = "test",
              value = df,
              row.names = NULL)
I'm getting this error message:
> Error in result_describe_parameters(rs@ptr, fieldDetails) :
> Query requires '8' params; '18' supplied.
I ran into this issue also. I agree with Hayward Oblad: the dbAppendTable() function appears to be finding another table of the same name, which throws the error. Our solution was to specify the name parameter as an Id() (from DBI::Id()).
So taking your example above:
# Append data
dbAppendTable(conn = con,
              name = Id(schema = "tmp", table = "test"),
              value = df,
              row.names = NULL)
Ran into this issue...
Error in result_describe_parameters(rs@ptr, fieldDetails) : Query requires '6' params; '18' supplied.
when saving to a Snowflake database, and I couldn't find any good information on the error.
It turns out that there was a test schema in which the tables had exactly the same names as in the prod schema. DBI::dbAppendTable() doesn't differentiate between the schemas, so until the tables in the test schema were renamed to unique table names, the params error persisted.
Hope this saves someone the 10 hours I spent trying to figure out why DBI was throwing the error.
See here for more on this.
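As a quick diagnostic, you can look for same-named tables across schemas before appending. A sketch, assuming con is your DBI connection and the backend exposes information_schema (Snowflake and PostgreSQL both do; note Snowflake usually stores unquoted identifiers in upper case):

# List every schema containing a table named 'test' ('test' is a placeholder).
DBI::dbGetQuery(con, "
  SELECT table_schema, table_name
  FROM information_schema.tables
  WHERE lower(table_name) = 'test'
")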
ODBC/DBI in R will not write to a table with a non-default schema. Add name = Id(schema = "my_schema", table = "table_name") to DBI::dbAppendTable(), or, in my case, to DBI::dbWriteTable().
Not sure why the function doesn't use the schema from my connection object though... seems redundant.
I am trying to load an example dataset from here: http://www.agrocampus-ouest.fr/math/RforStat/decathlon.csv to run an example PCA.
The correctly loaded data frame can be replicated with this line of code:
decathlon = read.csv('http://www.agrocampus-ouest.fr/math/RforStat/decathlon.csv',
                     header = TRUE, row.names = 1, check.names = FALSE,
                     dec = '.', sep = ';')
However, I was wondering if this can be replicated with function(s) from the readr package. A suitable function for this seems to be read_csv2; however, the row.names argument is not available:
dplyrtlon = read_csv2('http://www.agrocampus-ouest.fr/math/RforStat/decathlon.csv',
                      col_names = TRUE, col_types = NULL, skip = 0)
Any suggestion on how to do this within readr?
readr returns tibbles instead of data frames. Tibbles have speed and memory advantages over data frames, but they do not support row names.
Depending on what you want to do with your data after reading it in, you could either add a column name to the first column (it looks like it holds last names):
dplyrtlon <- read_csv2('http://www.agrocampus-ouest.fr/math/RforStat/decathlon.csv',
                       col_types = NULL, skip = 0)
names(dplyrtlon)[1] <- "last_name"
or you could convert the variable to a data frame, and use the content of the first column to set up row names:
r <- as.data.frame(dplyrtlon)
rownames(r) <- r[, 1]
r <- r[, -1]
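As a small addition (not part of the original answer): tibble ships a helper that does this conversion in one step, assuming you have renamed the first column to last_name as above:

library(tibble)

# Move the last_name column into the row names of a plain data frame.
r <- column_to_rownames(as.data.frame(dplyrtlon), var = "last_name")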
I have dumped several .txt files to an SQLite database on my computer's hard disk using the RSQLite package. Since the .txt files have no headers, I have to use the "header = FALSE" argument. Here is how my code looks:
for (i in 1:8) {
  dbWriteTable(conn = db, name = tbls[i], value = paths[i],
               row.names = FALSE, header = FALSE, sep = "\t",
               overwrite = TRUE)
}
Now I want to add column names to the tables in the SQLite database, how can I do that?
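No answer is shown here, but one hedged approach: SQLite 3.25+ supports ALTER TABLE ... RENAME COLUMN, which you can issue through DBI::dbExecute(). The target names below are hypothetical placeholders; check the current default names first with dbListFields():

# new_names is a placeholder; use one vector per table if they differ.
new_names <- c("id", "date", "value")

for (i in 1:8) {
  # Look up whatever default names the import produced, then rename in place.
  old_names <- dbListFields(db, tbls[i])
  for (j in seq_along(new_names)) {
    dbExecute(db, sprintf('ALTER TABLE "%s" RENAME COLUMN "%s" TO "%s"',
                          tbls[i], old_names[j], new_names[j]))
  }
}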
I am looping through some data and appending it to a csv file. What I want is to have the column names at the top of the file once, and then, as the loop runs, not to repeat them in the middle of the file.
If I set col.names = TRUE, the column names are repeated for each new loop iteration. If I set col.names = FALSE, there are no column names at all.
How do I do this most efficiently? I feel that this is such a common case that there must be a way to do it without writing code especially to handle it.
write.table(dd, "data.csv", append = TRUE, col.names = TRUE)
See ?file.exists.
write.table(dd, "data.csv", append = TRUE, col.names = !file.exists("data.csv"))
Thus column names are written only when you are not appending to a file that already exists.
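A minimal sketch of how that fits the loop described in the question (chunks is a stand-in for whatever you iterate over; sep = "," is added because the target is a .csv):

for (dd in chunks) {
  # Header is written only on the first pass, when the file doesn't exist yet.
  write.table(dd, "data.csv", append = TRUE, sep = ",",
              row.names = FALSE,
              col.names = !file.exists("data.csv"))
}

The first iteration may emit a harmless "appending column names to file" warning, since append = TRUE and col.names = TRUE are combined once.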
You may or may not also see a problem with identical row names, as write.table does not allow identical row names when appending. You could give this a try: on the first write to the file, use write.table with row.names = FALSE only. Then, from the second write onwards, use both col.names = FALSE and row.names = FALSE.
Here's the first write to file
> d1 <- data.frame(A = 1:5, B = 1:5) ## example data
> write.table(d1, "file.txt", row.names = FALSE)
We can check it with read.table("file.txt", header = TRUE). Then we can append the same data frame to that file with
> write.table(d1, "file.txt", row.names = FALSE,
              col.names = FALSE, append = TRUE)
And again we can check it with read.table("file.txt", header = TRUE)
So, if you have a list of data frames, say dlst, your code chunk that appends the data frames together might look something like
> dlst <- rep(list(d1), 3) ## list of example data
> write.table(dlst[[1]], "file.txt", row.names = FALSE)
> invisible(lapply(dlst[-1], write.table, "file.txt", row.names = FALSE,
                   col.names = FALSE, append = TRUE))
But as #MrFlick suggests, it would be much better to append the data frames in R, and then send them to file once. This would eliminate many possible errors/problems that could occur while writing to file. If the data is in a list, that could be done with
> dc <- do.call(rbind, dlst)
> write.table(dc, "file.txt")
Try changing the column names of the data frame using the names() command in R, replacing them with the same names as the existing table's columns, and then try the dbWriteTable command keeping row.names = FALSE. The issue will get solved.
e.g.
if your data frame df1 has the columns obs, name, age, then
names(df1) <- c('obs', 'name', 'age')
and then try
dbWriteTable(conn, 'table_name', df1, append = TRUE, row.names = FALSE)