RODBC - sqlSave() does not write in an alias table in Oracle

I am writing data into a table belonging to another schema using the sqlSave() function from the RODBC package. The owner of the other schema has created an alias with the same name as the original table. My user has sufficient rights to write into the table. The database is Oracle 11g.
This is how I write:
sqlSave(channel, object, tablename = table, safer=TRUE, rownames = FALSE, append = TRUE, verbose = FALSE, nastring = NULL, fast = TRUE)
When I run sqlSave() I get an error message from the Oracle DB. If I look at the SQL that R sends to the DB, I see that R duplicates the columns of the object I am trying to write. The SQL looks like this:
insert into table (column_A, column_B, column_A, column_B)
If the alias is removed and I prefix the table name with the schema, then I do not get any error message, but R does not execute the query at all.
sqlSave(channel, object, tablename = schema.table, safer=TRUE, rownames = FALSE, append = TRUE, verbose = FALSE, nastring = NULL, fast = TRUE)
Then I get:
insert into table (column_A, column_B) values(?,?)
The only thing that has worked so far is to give the table an alias that differs from the table name. In that case I manage to write into the table.
I would very much appreciate it if anybody could suggest a solution to my problem.
Thanks in advance for your response.
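For completeness, a fallback sketch (my own, not the poster's sqlSave() route): if sqlSave() keeps duplicating the column list, the rows can be inserted with an explicit, schema-qualified INSERT sent through sqlQuery(). The schema, table, and column names below are placeholders.
# Hypothetical identifiers: replace schema.table, column_A and column_B with the
# real names; `channel` is the open RODBC connection, `object` the data frame.
for (i in seq_len(nrow(object))) {
  qry <- sprintf("insert into schema.table (column_A, column_B) values ('%s', '%s')",
                 object$column_A[i], object$column_B[i])
  sqlQuery(channel, qry)
}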

Related

In R connected to an Access Database through ODBC, how can I update a text field?

I have R linked to an Access database using the ODBC and DBI packages. I scoured the internet and couldn't find a way to write an update query, so I'm using the dbSendStatement function to update entries individually. Combined with a for loop, this effectively works like an update query, with one snag: when I try to update any field in the database that is text, I get an error that says "[Microsoft][ODBC Microsoft Access Driver] One of your parameters is invalid."
DBI::dbSendStatement(conn = dB.Connection, statement = paste("UPDATE DC_FIMs_BLDG_Lvl SET kWh_Rate_Type = ",dquote(BLDG.LVL.Details[i,5])," WHERE FIM_ID = ",BLDG.LVL.Details[i,1]," AND BUILDING_ID = ",BLDG.LVL.Details[i,2],";", sep = ""))
If it's easier, when pasted, the code reads like this:
DBI::dbSendStatement(conn = dB.Connection, statement = paste("UPDATE DC_FIMs_BLDG_Lvl SET kWh_Rate_Type = “Incremental” WHERE FIM_ID = 26242807 AND BUILDING_ID = 515;", sep = ""))
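A hedged alternative sketch (not from the thread): bind the values as query parameters instead of pasting them into the SQL string, which avoids the curly quotes that dquote() can introduce. The column positions are taken from the snippet above; dB.Connection and BLDG.LVL.Details are the objects already defined there.
# Assumes i indexes the row of BLDG.LVL.Details being written, as in the loop above.
DBI::dbExecute(
  conn = dB.Connection,
  statement = "UPDATE DC_FIMs_BLDG_Lvl SET kWh_Rate_Type = ? WHERE FIM_ID = ? AND BUILDING_ID = ?",
  params = list(BLDG.LVL.Details[i, 5],
                BLDG.LVL.Details[i, 1],
                BLDG.LVL.Details[i, 2]))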

Unable to write dataframe in R as Update-Statement to PostGIS/PostgreSQL

I have the following dataframe:
library(rpostgis)
library(RPostgreSQL)
library(glue)
df <- data.frame(elevation = c(450, 900),
                 id = c(1, 2))
Now I try to upload this to a table in my PostgreSQL/PostGIS database. My connection (dbConnect) works properly for SELECT statements. However, I have tried two ways of updating a database table with this dataframe and both failed.
First:
pgInsert(postgis,name="fields",data.obj=df,overwrite = FALSE, partial.match = TRUE,
row.names = FALSE,upsert.using = TRUE,df.geom=NULL)
2 out of 2 columns of the data frame match database table columns and will be formatted for database insert.
Error: x must be character or SQL
I do not know what the error is trying to tell me, as the values in both the dataframe and the table are set to integer.
Second:
sql <- glue_sql("UPDATE fields SET elevation ={df$elevation} WHERE
id = {df$id};", .con = postgis)
> sql
<SQL> UPDATE fields SET elevation =450 WHERE
id = 1;
<SQL> UPDATE fields SET elevation =900 WHERE
id = 2;
dbSendStatement(postgis,sql)
<PostgreSQLResult>
In both cases no data is transferred to the database, and I do not see any error logs within the database.
Any hint on how to solve this problem?
It was a mistake on my side; I got glue_sql wrong. To correctly update the database with every query created by glue_sql, you have to loop through the created object, as in the following example:
for (i in seq_along(sql)) {
  dbSendStatement(postgis, sql[i])
}
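A minor variant (my note, not from the answer): dbExecute() both sends the statement and frees the result, so the same loop can also be written as
for (stmt in sql) {
  dbExecute(postgis, stmt)   # returns the number of affected rows
}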

Appending new data to sqlite db in R

I have created a table in a sqlite3 database from R using the following code:
con <- DBI::dbConnect(drv = RSQLite::SQLite(),
dbname="data/compfleet.db")
s<- sprintf("create table %s(%s, primary key(%s))", "PositionList",
paste(names(FinalTable), collapse = ", "),
names(FinalTable)[2])
dbGetQuery(con, s)
dbDisconnect(con)
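For illustration (hypothetical column names, not from the question), if FinalTable had the columns ShipName and UID, printing s would give:
> s
[1] "create table PositionList(ShipName, UID, primary key(UID))"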
The second column of the table is UID, which is the primary key. I then run a script to update the data in the table. The updated data could contain UIDs that already exist in the table. I don't want these existing records to be updated; I just want the new records (with new UID values) to be appended to this database. The code I am using is:
DBI::dbWriteTable(con, "PositionList", FinalTable, append = TRUE, row.names = FALSE, overwrite = FALSE)
Which returns an error:
Error in result_bind(res@ptr, params) :
  UNIQUE constraint failed: PositionList.UID
How can I append only the rows with new UID values, without changing the existing records, even when duplicate UIDs appear in my update script?
You can query the existing UIDs (as a one-column data frame) and remove corresponding rows from the table you want to insert.
uid_df <- dbGetQuery(con, "SELECT UID FROM PositionList")
dbWriteTable(con, "PositionList", FinalTable[!(FinalTable$UID %in% uid_df[[1]]), ], ...)
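A slightly fuller sketch of the same idea (my elaboration; assumes con is the open SQLite connection and FinalTable carries a UID column):
uid_df   <- dbGetQuery(con, "SELECT UID FROM PositionList")
new_rows <- FinalTable[!(FinalTable$UID %in% uid_df$UID), ]
if (nrow(new_rows) > 0) {
  # Append only the rows whose UID is not yet present in PositionList
  dbWriteTable(con, "PositionList", new_rows,
               append = TRUE, row.names = FALSE)
}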
When you are going to insert data, first look up the existing record in the database by UID. If a record with that UID already exists, do nothing; otherwise insert the new data with its new UID. The error is shown because a record with a duplicate primary key (UID) must not exist.

Why is dbWriteTable in R not able to write data to a non-default schema in HANA?

Why is dbWriteTable not able to write data to a non-default schema in HANA?
dbcConnection_Test1 <- dbConnect(jdbcDriver, "jdbc:sap://crbwhd12:30215?SCHEMA_NAME/TABLE_NAME", "Username", "Password")
dbWriteTable(jdbcConnection_Test1, name = "TABLE_NAME", value = DataFrame1, overwrite = FALSE, append = TRUE)
I have established a connection between R and HANA.
Then I specified the HANA table name into which the data needs to be written.
I am getting an error message saying that the table name is invalid and is not available in the default schema. I want to upload the data to some other schema.
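A hedged sketch (an assumption on my part, not verified against HANA): one thing worth trying is a schema-qualified table name, so the write does not fall back to the user's default schema. SCHEMA_NAME and TABLE_NAME are the placeholders from the question.
# Placeholder identifiers; jdbcConnection_Test1 is the RJDBC connection from above.
dbWriteTable(jdbcConnection_Test1,
             name  = "SCHEMA_NAME.TABLE_NAME",
             value = DataFrame1,
             overwrite = FALSE,
             append = TRUE)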

Data from ODBC blob not matching return from SQL query

I’m reading a BLOB field from an ODBC data connection (the BLOB field is a file). I connect and query the database, returning the blob and the filename. However, the blob itself does not contain the same data as I find in the database. My code is as follows, along with the data returned vs. what is in the DB.
library(RODBC)
sqlret<-odbcConnect('ODBCConnection')
qry<-'select content,Filename from document with(nolock) where documentid = \'xxxx\''
df<-sqlQuery(sqlret,qry)
close(sqlret)
rootpath<-paste0(getwd(),'/DocTest/')
dir.create(rootpath,showWarnings = FALSE)
content<-unlist(df$content)
fileout<-file(paste0(rootpath,df$Filename),"w+b")
writeBin(content, fileout)
close(fileout)
database blob is
0x50726F642050434E203A0D0A35363937313533320D0A33383335323133320D0A42463643453335380D0A0D0A574C4944203A0D0A0D0…
the dataframe’s content is
00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000004b020000000000000d0000f1000000008807840200000000d0f60c0c0000…
The filenames match up, as does the size of the content/blob.
The exact approach you take may vary depending on your ODBC driver. I'll demonstrate how I do this on MS SQL Server, and hopefully you can adapt it to your needs.
I'm going to use a table in my database called InsertFile with the following definition:
CREATE TABLE [dbo].[InsertFile](
[OID] [int] IDENTITY(1,1) NOT NULL,
[filename] [varchar](50) NULL,
[filedata] [varbinary](max) NULL
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
Now let's create a file that we will push into the database.
file <- "hello_world.txt"
write("Hello world", file)
I need to do a little work to prep the bytes of this file to go into SQL. I use this function for that.
prep_file_for_sql <- function(filename){
  # Read the whole file as a raw vector (one element per byte)
  bytes <-
    mapply(FUN = readBin,
           con = filename,
           what = "raw",
           n = file.info(filename)[["size"]],
           SIMPLIFY = FALSE)
  # Convert each raw byte to its two-character hex representation
  chars <-
    lapply(X = bytes,
           FUN = as.character)
  # Collapse the hex pairs into a single string per file
  vapply(X = chars,
         FUN = paste,
         collapse = "",
         FUN.VALUE = character(1))
}
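As a quick check of what the helper returns for the hello_world.txt created above (the file's bytes as one lowercase hex string):
> prep_file_for_sql("hello_world.txt")
[1] "48656c6c6f20776f726c640a"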
Now, this is a bit strange, but the SQL Server ODBC driver is pretty good at writing VARBINARY columns, yet terrible at reading them.
Conversely, the SQL Server Native Client 11.0 ODBC driver is terrible at writing VARBINARY columns, but okay-ish at reading them.
So I'm going to have two RODBC connections, conn_write and conn_read.
conn_write <-
RODBC::odbcDriverConnect(
paste0("driver=SQL Server; server=[server_name]; database=[database_name];",
"uid=[user_name]; pwd=[password]")
)
conn_read <-
RODBC::odbcDriverConnect(
paste0("driver=SQL Server Native Client 11.0; server=[server_name]; database=[database_name];",
"uid=[user_name]; pwd=[password]")
)
Now I'm going to insert the text file into the database using a parameterized query (sqlExecute() comes from the RODBCext package, which extends RODBC with parameterized queries).
sqlExecute(
channel = conn_write,
query = "INSERT INTO dbo.InsertFile (filename, filedata) VALUES (?, ?)",
data = list(file,
prep_file_for_sql(file)),
fetch = FALSE
)
And now to read it back out using a parameterized query. The unpleasant trick to use here is recasting your VARBINARY(MAX) column as a VARBINARY(8000) (don't ask me why, but it works).
X <- sqlExecute(
channel = conn_read,
query = paste0("SELECT OID, filename, ",
"CAST(filedata AS VARBINARY(8000)) AS filedata ",
"FROM dbo.InsertFile WHERE filename = ?"),
data = list("hello_world.txt"),
fetch = TRUE,
stringsAsFactors = FALSE
)
Now you can look at the contents with
unlist(X$filedata)
And write the file with
writeBin(unlist(X$filedata),
con = "hello_world2.txt")
BIG DANGEROUS CAVEAT
You need to be aware of the size of your files. I usually store files as VARBINARY(MAX), and SQL Server isn't very friendly about exporting those through ODBC (I'm not sure about other SQL engines; see RODBC sqlQuery() returns varchar(255) when it should return varchar(MAX) for more details).
The only way I've found to get around this is to recast the VARBINARY(MAX) as a VARBINARY(8000). That is obviously a terrible solution if you have more than 8000 bytes in your file. When I need to get around this, I've had to loop over the VARBINARY(MAX) column, create multiple new columns each of length 8000, and then paste them all together in R (check out: Reconstitute PNG file stored as RAW in SQL Database).
As of yet, I've not come up with a generalized solution to this problem. Perhaps that's something I should spend more time on, though.
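A rough sketch of that chunking workaround (my own illustration, not the linked answer; the chunk count of 3 and the column aliases are arbitrary):
# Pull the VARBINARY(MAX) column in 8000-byte pieces via SUBSTRING and paste the
# pieces back together in R; conn_read is the connection defined earlier.
chunks <- paste0("CAST(SUBSTRING(filedata, ", (0:2) * 8000 + 1,
                 ", 8000) AS VARBINARY(8000)) AS chunk", 1:3,
                 collapse = ", ")
qry <- paste0("SELECT OID, filename, ", chunks,
              " FROM dbo.InsertFile WHERE filename = ?")
Y <- sqlExecute(conn_read, qry,
                data = list("hello_world.txt"),
                fetch = TRUE, stringsAsFactors = FALSE)
filedata <- paste0(Y$chunk1, Y$chunk2, Y$chunk3)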
The 8000-byte limit is imposed by the ODBC driver, not by the RODBC, DBI, or odbc packages.
Use the latest driver to remove the limitation: ODBC Driver 17 for SQL Server
https://learn.microsoft.com/en-us/sql/connect/odbc/linux-mac/installing-the-microsoft-odbc-driver-for-sql-server?view=sql-server-2017
There is no need to cast the column to VARBINARY(8000) with this latest driver. The following should work:
X <- sqlExecute(
channel = conn_read,
query = paste0("SELECT OID, filename, ",
"filedata ",
"FROM dbo.InsertFile WHERE filename = ?"),
data = list("hello_world.txt"),
fetch = TRUE,
stringsAsFactors = FALSE
)
