Using R, I tried to insert a data frame into a MySQL table. My script looks like this:
con <- dbConnect(RMySQL::MySQL(), username = "xxxxxx", password = "xxxxxx",
                 host = "127.0.0.1", dbname = "xxxxx")
dbWriteTable(conn = con, name = 'table', value = as.data.frame(df),
             append = TRUE, row.names = FALSE)
dbDisconnect(con)
When the script hits this line:
dbWriteTable(conn = con, name = 'table', value = as.data.frame(df), append = TRUE, row.names = FALSE)
I got an error like this:
Error in .local(conn, statement, ...) : could not run statement: Invalid utf8 character string: 'M'
I am not sure why this error occurred; this part of the script has run fine on another machine.
Please advise.
You need to create a proper connection first; only then can you insert a data frame into your DB.
For the connection to work, the username, password, host name, and database name must all be correct. The code below is essentially the same as yours, with some parameters removed.
Try this:
mydb = dbConnect(MySQL(), user='root', password='password', dbname='my_database', host='localhost')
Here I insert the iris data into my_database:
data(iris)
dbWriteTable(mydb, name='db', value=iris)
This writes the iris data frame to my_database under the table name db.
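If the error persists even with a valid connection, the usual cause is that the data frame itself contains text that is not valid UTF-8. A minimal sketch that re-encodes all character columns before writing (the latin1 source encoding is an assumption; substitute whatever encoding your data actually uses):

# Re-encode every character column to UTF-8 before the insert.
# "latin1" is an assumed source encoding, not taken from the question.
df[] <- lapply(df, function(x) {
  if (is.character(x)) iconv(x, from = "latin1", to = "UTF-8") else x
})
dbWriteTable(conn = con, name = 'table', value = as.data.frame(df),
             append = TRUE, row.names = FALSE)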
I'm just starting my journey with R, so I'm a complete newbie and I can't find anything that will help me solve this.
I have a CSV table (random integers in each column) with 9 columns. I read 8 of them and want to append them to a SQL table with 8 fields (Col1 through Col8, all ints). After loading the CSV into RStudio, it looks right and has only 8 columns.
The code I'm using is:
# Libraries
library(DBI)
library(odbc)
library(tidyverse)
# CSV Files
df <- head(
  read_delim(
    "C:/Data/test.txt",
    " ",
    trim_ws = TRUE,
    skip = 1,
    skip_empty_rows = TRUE,
    col_types = cols('X7' = col_skip())
  ),
  -1
)
# Add Column Headers
col_headings <- c('Col1', 'Col2', 'Col3', 'Col4', 'Col5', 'Col6', 'Col7', 'Col8')
names(df) <- col_headings
# Connect to SQL Server
con <- dbConnect(odbc(), "SQL", timeout = 10)
# Append data
dbAppendTable(conn = con,
              schema = "tmp",
              name = "test",
              value = df,
              row.names = NULL)
I'm getting this error message:
> Error in result_describe_parameters(rs@ptr, fieldDetails) :
> Query requires '8' params; '18' supplied.
I ran into this issue as well. I agree with Hayward Oblad: the dbAppendTable() function appears to be finding another table of the same name, which throws the error. Our solution was to specify the name parameter as an Id() (from DBI::Id()).
So taking your example above:
# Append data
dbAppendTable(conn = con,
              name = Id(schema = "tmp", table = "test"),
              value = df,
              row.names = NULL)
Ran into this issue...
Error in result_describe_parameters(rs@ptr, fieldDetails) : Query requires '6' params; '18' supplied.
when saving to a Snowflake database, and I couldn't find any good information on the error.
It turns out there was a test schema whose tables had exactly the same names as in the prod schema. DBI::dbAppendTable() doesn't differentiate between the schemas, so until the tables in the test schema were renamed to unique names, the params error persisted.
Hope this saves someone the 10 hours I spent trying to figure out why DBI was throwing the error.
See here for more on this.
ODBC/DBI in R will not write to a table with a non-default schema.
Add name = Id(schema = "my_schema", table = "table_name") to DBI::dbAppendTable(), or, in my case, to DBI::dbWriteTable().
I'm not sure why the function doesn't use the schema from my connection object, though; it seems redundant.
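Put together, a minimal sketch of the fix (schema and table names here are placeholders):

# Write df to my_schema.table_name instead of the default schema.
dbWriteTable(con,
             name = Id(schema = "my_schema", table = "table_name"),
             value = df,
             append = TRUE)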
I have a problem importing data from EDB Postgres into R (with RPostgres). I am using this code:
library(RPostgres)
conn <- dbConnect(Postgres(), dbname = 'xx', host = 'xx', port = 'xx',
                  user = 'xx', password = 'xx',
                  options = "-c search_path=xx")
a <- dbGetQuery(conn, "SELECT * FROM xx")
After importing the data into R, everything is OK except the date columns, which become corrupted. A few examples:
Postgres value: 2009-11-02, after importing into R: 0231-08-11
Postgres value: 2009-08-12, after importing into R: 1206-01-15
The data type in Postgres for this column is date. If I read the date column with the query:
".. to_char(column_with_date, 'DD-MM-YYYY') as column_with_date .."
then the data arrive in R correctly, but their type in R is character, not Date.
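To at least recover the Date type, the character column can be parsed back on the R side. A minimal sketch, assuming the DD-MM-YYYY format from the query above:

# Read the date as text, then parse it into R's Date class.
a <- dbGetQuery(conn, "SELECT to_char(column_with_date, 'DD-MM-YYYY') AS column_with_date FROM xx")
a$column_with_date <- as.Date(a$column_with_date, format = "%d-%m-%Y")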
I tried importing the same data from a MySQL database into R (with RMySQL), and in that case the data are not corrupted. The code used:
library(RMySQL)
conn <- dbConnect(MySQL(), user = "xx", password = "xx", dbname = "xx", host = "xx")
a <- dbGetQuery(conn, "SELECT * FROM xx")
Thank you
I have a table in a Postgres database with some columns and values. I imported this table into local memory, performed some computations on the columns, and now have a data frame with new values. I want to place this updated data frame back into the same table in the database.
drv <- dbDriver("PostgreSQL")
con <- dbConnect(drv, host = "*****", port = "****",
                 dbname = "sai", user = "sai", password = "password")
saix_account_demo <- dbReadTable(con = con, name = "saix_account")
...
dbWriteTable(con, name = "saix_account", value = saix_account_demo,
             row.names = FALSE, append = TRUE)
I ran dbWriteTable() with append = TRUE and overwrite = FALSE, but I get an error saying a primary key constraint was violated. I understand the problem: I was trying to insert the data instead of updating it.
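One common pattern for this is to stage the modified rows in a separate table and then issue an UPDATE ... FROM against the real table. A minimal sketch, assuming a key column account_id and a value column balance (both hypothetical names; substitute your real columns):

# Stage the modified rows, then update the target table from the staging table.
dbWriteTable(con, name = "saix_account_stage", value = saix_account_demo,
             row.names = FALSE, overwrite = TRUE)
res <- dbSendQuery(con, "
  UPDATE saix_account AS t
  SET    balance = s.balance
  FROM   saix_account_stage AS s
  WHERE  t.account_id = s.account_id")
dbClearResult(res)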
RJDBC connects to Hive fine and also reads data from Hive, but it does not write data to Hive using dbWriteTable().
See below:
options(java.parameters = "-Xmx8g")
library(DBI)
library(rJava)
library(RJDBC)
# Collect all driver and Hadoop jars for the JDBC classpath
cp <- c(
  list.files("/tmp/R_hive_libs/cloudera_hive_jars", pattern = "[.]jar",
             full.names = TRUE, recursive = TRUE),
  list.files("/tmp/R_hive_libs/R_hadoop_libs", pattern = "[.]jar",
             full.names = TRUE, recursive = TRUE),
  list.files("/tmp/R_hive_libs/R_hadoop_libs/lib", pattern = "[.]jar",
             full.names = TRUE, recursive = TRUE)
)
drv <- JDBC(driverClass = "com.cloudera.hive.jdbc4.HS2Driver", classPath=cp)
conn <- dbConnect(drv, "jdbc:hive2://XXXXXX:10000/default", "user", "password")
show_databases <- dbGetQuery(conn, "show databases")
List_of_Tables <- dbListTables(conn)
data1 <- dbGetQuery(conn, "select * from XXX.xxx limit 10000")
data_to_write_back_to_hive <- data.frame(aggregate(data1$xxx.xxx, by=list(Month=data1$xxx.cmp_created_timestamp_month), FUN=sum))
data_to_write_back_to_hive[[2]] <-c(10,20)
colnames(data_to_write_back_to_hive) <- c("Month", "Energy")
dbWriteTable(conn, "xxxx.checking",data_to_write_back_to_hive)
How do I write data back to Hive? It gives the error below:
Error in .local(conn, statement, ...) : execute JDBC update query failed in dbSendUpdate ([Simba]HiveJDBCDriver ERROR
processing query/statement. Error Code: 40000, SQL state:
TStatus(statusCode:ERROR_STATUS,
infoMessages:[*org.apache.hive.service.cli.HiveSQLException:Error
while compiling statement: FAILED: ParseException line 1:36 mismatched
input 'PRECISION' expecting ) near 'DOUBLE' in create table
statement:28:27,
org.apache.hive.service.cli.operation.Operation:toSQLException:Operation.java:326,
org.apache.hive.service.cli.operation.SQLOperation:prepare:SQLOperation.java:102,
org.apache.hive.service.cli.operation.SQLOperation:runInternal:SQLOperation.java:171,
org.apache.hive.service.cli.operation.Operation:run:Operation.java:268,
org.apache.hive.service.cli.session.HiveSessionImpl:executeStatementInternal:HiveSessionImpl.java:410,
org.apache.hive.service.cli.session.HiveSessionImpl:executeStatement:HiveSessionImpl.java:391,
sun.reflect.GeneratedMethodAccessor56:invoke::-1,
sun.reflect.DelegatingMeth
This question comes up fairly often. The short answer is that you can't do what you want at present: the DBI/JDBC drivers don't generate syntactically correct HiveQL. Here dbWriteTable() builds a CREATE TABLE statement with the column type DOUBLE PRECISION, which Hive's parser rejects, hence the ParseException near 'DOUBLE'.
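One workaround is to skip dbWriteTable() entirely: create the table yourself with valid HiveQL types and insert the rows with RJDBC's dbSendUpdate(). A sketch, assuming the driver accepts prepared-statement parameters; row-by-row inserts are slow in Hive, so this is only practical for small data frames:

# Create the Hive table with explicit, Hive-legal column types.
dbSendUpdate(conn, "CREATE TABLE xxxx.checking (Month STRING, Energy DOUBLE)")
# Insert the aggregated rows one at a time via bind parameters.
for (i in seq_len(nrow(data_to_write_back_to_hive))) {
  dbSendUpdate(conn, "INSERT INTO xxxx.checking VALUES (?, ?)",
               data_to_write_back_to_hive$Month[i],
               data_to_write_back_to_hive$Energy[i])
}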
I am using R to try to connect to a Teradata database and am running into difficulties.
The steps in the process are below:
1) Create Connection
2) Create a VOLATILE TABLE
3) Load information from a data frame into the Volatile table
This is where it fails, giving me an error message:
Error in sqlSave(conn, mydata, tablename = "TEMP", rownames = FALSE, :
first argument is not an open RODBC channel
The code is below:
# Import Data From Text File and remove duplicates
mydata = read.table("Keys.txt")
mydata.unique = unique(mydata)
strSQL.TempTable <- paste("CREATE VOLATILE TABLE TEMP………[Table Details]",
                          "UNIQUE PRIMARY INDEX(index)",
                          "ON COMMIT PRESERVE ROWS;")
# Connect To Database
conn <- tdConnect('Teradata')
# Execute Temp Table
tdQuery(strSQL.TempTable)
sqlSave(conn, mydata, tablename = "TEMP ", rownames = FALSE, append = TRUE)
Can anyone help? Is it closing the connection before I can upload the information into the table?
My mistake: I had been confusing libraries.
Basically, the lines
# Connect To Database
conn <- tdConnect('Teradata')
# Execute Temp Table
tdQuery(strSQL.TempTable)
sqlSave(conn, mydata, tablename = "TEMP ", rownames = FALSE, append = TRUE)
can all be replaced by this:
# Connect To Database
channel <- odbcConnect('Teradata')
# Execute Temp Table
sqlQuery(channel, paste(strSQL.TempTable))
sqlSave(channel, mydata, table = "TEMP", rownames = FALSE, append = TRUE)
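When the upload is finished, the channel should be released as well:

# Close the RODBC connection
odbcClose(channel)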
Now I'm being told I don't have access to do this, but that is another question for another forum.
Thanks