R cannot import data through the bigrquery package - !self$finished is not TRUE

When trying to pull larger datasets from my GBQ account, I hit this error message:
Error in pb_tick(self, private, len, tokens) : !self$finished is not TRUE
The table I am querying is ~105 GB, and the same SQL query works fine in the GBQ console. It's not a very complex query, just asking for about 200 MB of data. Query below. I can't find anything related to this error message; hoping y'all can help out.
R code:
library(bigrquery)
project_id <- "xxx-xxx-xxx" # put your project ID here
sql <- "SELECT * FROMxxx-xxx-xxx.Conversion_Records.Conversion_records_2018_9_to_10`
WHERE event_time >= '2018-09-16 00:00:01'
AND lower(campaign_name) like 'camp_a%'
AND lower(vendor_name) = 'vendor_a';"
gbq <- query_exec(sql, project = project_id, use_legacy_sql = FALSE, max_pages = Inf)
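One workaround I am considering (a sketch, not a confirmed fix): pb_tick() comes from the progress-bar machinery, so suppressing the progress bar via query_exec()'s quiet argument might avoid the error entirely:
# Possible workaround (untested): silence the progress bar that raises pb_tick()
gbq <- query_exec(sql, project = project_id, use_legacy_sql = FALSE,
                  max_pages = Inf, quiet = TRUE)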

Related

In R connected to an Access Database through ODBC, how can I update a text field?

I have R linked to an Access database using the odbc and DBI packages. I scoured the internet and couldn't find a way to write an update query, so I'm using the dbSendStatement function to update entries individually. Combined with a for loop, this effectively works like an update query, with one snag: when I try to update any text field in the database I get an error that says "[Microsoft][ODBC Microsoft Access Driver] One of your parameters is invalid."
DBI::dbSendStatement(conn = dB.Connection, statement = paste("UPDATE DC_FIMs_BLDG_Lvl SET kWh_Rate_Type = ",dquote(BLDG.LVL.Details[i,5])," WHERE FIM_ID = ",BLDG.LVL.Details[i,1]," AND BUILDING_ID = ",BLDG.LVL.Details[i,2],";", sep = ""))
If it's easier, when pasted, the code reads like this:
DBI::dbSendStatement(conn = dB.Connection, statement = paste("UPDATE DC_FIMs_BLDG_Lvl SET kWh_Rate_Type = “Incremental” WHERE FIM_ID = 26242807 AND BUILDING_ID = 515;", sep = ""))

Unable to write dataframe in R as UPDATE statement to PostGIS/PostgreSQL

I have the following dataframe:
library(rpostgis)
library(RPostgreSQL)
library(glue)
df <- data.frame(elevation = c(450, 900),
                 id = c(1, 2))
Now I try to upload this to a table in my PostgreSQL/PostGIS database. My connection (dbConnect) works properly for SELECT statements. However, I tried two ways of updating a database table with this dataframe, and both failed.
First:
pgInsert(postgis, name = "fields", data.obj = df, overwrite = FALSE,
         partial.match = TRUE, row.names = FALSE, upsert.using = TRUE,
         df.geom = NULL)
2 out of 2 columns of the data frame match database table columns and will be formatted for database insert.
Error: x must be character or SQL
I do not know what the error is trying to tell me, as the values both in the dataframe and in the table are set to integer.
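One suspicion, in case it helps: if I read rpostgis's interface correctly, upsert.using expects the name(s) of the column(s) that identify an existing row (or a constraint name) rather than a logical flag, which would match the complaint that x must be character or SQL. A sketch under that assumption:
# Sketch: pass the key column by name instead of TRUE
pgInsert(postgis, name = "fields", data.obj = df, overwrite = FALSE,
         partial.match = TRUE, row.names = FALSE,
         upsert.using = "id", df.geom = NULL)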
Second:
sql <- glue_sql("UPDATE fields SET elevation = {df$elevation} WHERE id = {df$id};",
                .con = postgis)
> sql
<SQL> UPDATE fields SET elevation =450 WHERE
id = 1;
<SQL> UPDATE fields SET elevation =900 WHERE
id = 2;
dbSendStatement(postgis,sql)
<PostgreSQLResult>
In both cases no data is transferred to the database, and I do not see any error logs within the database.
Any hint on how to solve this problem?
It was a mistake on my side; I got glue_sql wrong. To correctly update the database with every query created by glue_sql, you have to loop over the created object, as in the following example:
for (i in seq_along(sql)) {
  res <- dbSendStatement(postgis, sql[i])
  dbClearResult(res)  # release each result before sending the next statement
}
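Alternatively, assuming your DBI backend supports it, dbExecute() sends one statement and frees its result in a single call, returning the number of affected rows:
# Same effect as the loop above; dbExecute() also clears the result for you
for (stmt in sql) {
  DBI::dbExecute(postgis, stmt)
}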

Read a View created from a procedure in SAP HANA from R

I have a schema in SAP HANA named "HYZ_ProcurementToSales" and a view "V_HYZ_P25_Market_Market_Orders" which is created from a procedure. I am trying to extract the view in R server version 1.0.153. The code I am using is:
library(RJDBC)
conn_server <- dbConnect(jdbcDriver,
                         "jdbc:sap://rdkom12.dhcp.pal.sap.corp:30015",
                         "system", "manager")
res <- dbGetQuery(conn_server,
                  "select * from HYZ_ProcurementToSales.V_HYZ_P25_Market_Market_Orders")
The error that I get is this:
"Unable to retrieve JDBC result set for
select * from HYZ_ProcurementToSales.V_HYZ_P25_Market_Market_Orders".
My belief is that something other than dbGetQuery will do the trick here. It works fine if I simply do
res <- dbGetQuery(conn,"select * from Tables")
The following works for me on HANA 1 SPS12 with a procedure that exposes a view called V_CURRENTUSERS:
library(RJDBC)
drv <- JDBC("com.sap.db.jdbc.Driver",
"C:\\Program Files\\SAP\\hdbclient\\ngdbc.jar",
identifier.quote='"')
conn <- dbConnect(drv, "jdbc:sap://<hanaserver>:3<instance>15/?", "**username**", "*pw*")
jusers <- dbFetch(dbSendQuery(conn = conn, statement = 'select * from v_currentusers;'))
At this point, the whole result set is bound to jusers.
Once finished, you should release the result set again:
dbClearResult(jusers)
and finally close the connection:
dbDisconnect(conn)
Be aware that procedures with result views are deprecated and should not be used/developed anymore. Instead, use table functions as these can also be reused in information views and allow for dynamic parameter assignment.
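For illustration, a table function is queried from R exactly like a view; the function name below is invented for the example, and any parameters would be passed inside the parentheses:
# Hypothetical table function standing in for the result-view procedure
res <- dbGetQuery(conn, 'SELECT * FROM "HYZ_ProcurementToSales"."TF_MARKET_ORDERS"()')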

dplyr & monetdb - appropriate syntax for querying schema.table?

In MonetDB I have set up a schema main, and my tables are created in this schema.
For example, the department table is main.department.
With dplyr I try to query the table:
mdb <- src_monetdb(dbname="model", user="monetdb", password="monetdb")
tbl(mdb, "department")
But I get
Error in .local(conn, statement, ...) :
Unable to execute statement 'PREPARE SELECT * FROM "department"'.
Server says 'SELECT: no such table 'department'' [#42S02].
I tried to use "main.department" and other similar combinations with no luck.
What is the appropriate syntax?
There is a somewhat hacky workaround for this: we can manually set the default schema for the connection. I have a database testing, in which there is a schema foo with a table called bar.
mdb <- src_monetdb("testing")
dbSendQuery(mdb$con, "SET SCHEMA foo")
t <- tbl(mdb, "bar")
The dbplyr package (the database backend for dplyr) has an in_schema() function for these cases:
conn <- dbConnect(
  MonetDB.R(),
  host = "localhost",
  dbname = "model",
  user = "monetdb",
  password = "monetdb",
  timeout = 86400L
)
department <- tbl(conn, dbplyr::in_schema("main", "department"))
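From there the usual dplyr verbs compose lazily and run inside MonetDB; for example (the id column here is made up for illustration):
library(dplyr)
department %>%
  filter(id > 10) %>%  # translated to SQL and executed in the database
  collect()            # collect() pulls the result into R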

RODBC - sqlSave() does not write to an alias table in Oracle

I am writing data into a table belonging to another schema, using the function sqlSave() from the RODBC package. The user of the other schema has created an alias with the same name as the original table. My user has sufficient rights to write into the table. The database is Oracle 11g.
This is how I write:
sqlSave(channel, object, tablename = table, safer = TRUE, rownames = FALSE,
        append = TRUE, verbose = FALSE, nastring = NULL, fast = TRUE)
When I run sqlSave() I get an error message from the Oracle DB. If I look at the SQL which R sends to the DB, I see that R doubles the columns of the object I try to write. The SQL looks like this:
insert into table (column_A, column_B, column_A, column_B)
If the alias is removed and I use the schema as a prefix to the table, then I do not get any error message; however, R does not execute the query at all.
sqlSave(channel, object, tablename = schema.table, safer = TRUE, rownames = FALSE,
        append = TRUE, verbose = FALSE, nastring = NULL, fast = TRUE)
Then I get:
insert into table (column_A, column_B) values(?,?)
The only thing that has worked so far is to give the alias a different name from the table's. In that case I manage to write into the table.
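In case it narrows things down, here is one diagnostic I ran as a sketch; sqlColumns() is RODBC's column lookup, which, as far as I can tell, feeds the INSERT that sqlSave() builds when appending:
# If the alias makes every column appear twice here, that would explain
# the doubled column list in the generated INSERT
sqlColumns(channel, table)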
I would very much appreciate it if anybody could suggest a solution to my problem.
Thanks in advance for your response.
