Error: BigQuery does not support temporary tables

I'm trying to join tables from two different datasets in the same project. How can I do this?
library(tidyverse)
library(bigrquery)

con1 <-
  dbConnect(
    drv = bigrquery::bigquery(),
    project = PROJECT,
    dataset = "dataset_1"
  )

con2 <-
  dbConnect(
    drv = bigrquery::bigquery(),
    project = PROJECT,
    dataset = "dataset_2"
  )

A <- con1 %>% tbl("A")
B <- con2 %>% tbl("B")

inner_join(A, B,
           by = "key",
           copy = TRUE) %>%
  collect()
Then I get the error: Error: BigQuery does not support temporary tables

The problem is most likely that you are using two different connections, one for each table. When you attempt the join, dplyr tries to copy data from one source into a temporary table on the other source.
See this question and the copy parameter in this documentation (it's a different package, but the principle is the same).
The solution is to only use a single connection for all your tables. Something like this:
con <-
  dbConnect(
    drv = bigrquery::bigquery(),
    project = PROJECT,
    dataset = "dataset_1"
  )

A <- con %>% tbl("A")
B <- con %>% tbl("B")

inner_join(A, B,
           by = "key") %>%
  collect()
You may need to leave the dataset parameter blank in your connection string, or use in_schema to include the dataset name along with the table when you connect to a remote table. It's hard to be sure without knowing more about the structure of your database(s).
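For example, a minimal sketch of that approach (untested; the dataset and table names are the placeholders from the question, and in_schema() comes from dbplyr):
# Sketch only: one bigrquery connection, datasets referenced via in_schema()
library(dplyr)
library(dbplyr)

con <- DBI::dbConnect(
  bigrquery::bigquery(),
  project = PROJECT
)

A <- tbl(con, in_schema("dataset_1", "A"))
B <- tbl(con, in_schema("dataset_2", "B"))

inner_join(A, B, by = "key") %>%
  collect()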

Related

Delete specific rows in specific table in a SQLite database

I have multiple data tables in a SQLite database. I am trying to delete specific rows of one table using the DBI package. Here is the code:
library(dplyr)
library(DBI)

con <- DBI::dbConnect(RSQLite::SQLite(), dbname = "C:\\DB2.sqlite", password = "password")
DBI::dbWriteTable(con, "data_iris", iris, overwrite = TRUE)

query <- "DELETE FROM data_iris WHERE Species = ?;"
specie <- 'setosa'
res <- dbExecute(con, query, params = list(specie))
res
[1] 50
The above code works fine. But why does the following code not work?
query <- 'DELETE FROM ? WHERE Species = ?;'
table_name <- "data_iris"
res <- dbExecute(con, query, params = c(table_name, specie))
# Error: near "?": syntax error
I cannot use the first approach since table_name changes dynamically (in a Shiny app).
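As an aside, parameter placeholders (?) bind values only, never identifiers such as table names, which is why the second query fails. A hedged sketch of one possible workaround using DBI::dbQuoteIdentifier() (untested):
# Sketch only: quote the dynamic table name as an identifier,
# and keep the value bound through a placeholder as before.
table_name <- "data_iris"   # changes dynamically in the Shiny app
specie <- "setosa"

query <- paste0("DELETE FROM ",
                DBI::dbQuoteIdentifier(con, table_name),
                " WHERE Species = ?;")
res <- dbExecute(con, query, params = list(specie))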

Can I run a BigQuery SQL query and then continue wrangling the data using dbplyr?

In another project working with Amazon Athena I could do this:
con <- DBI::dbConnect(odbc::odbc(),
                      Driver = "path-to-driver",
                      S3OutputLocation = "location",
                      AwsRegion = "eu-west-1",
                      AuthenticationType = "IAM Profile",
                      AWSProfile = "profile",
                      Schema = "prod")

tbl(con,
    # Run SQL query
    sql('SELECT *
         FROM TABLE')) %>%
  # Without having collected the data, I could further wrangle it inside the database
  # using dplyr code
  select(var1, var2) %>%
  mutate(var3 = var1 + var2)
However, now using BigQuery I get the following error:
con <- DBI::dbConnect(bigrquery::bigquery(),
                      project = "project")

tbl(con,
    sql('SELECT *
         FROM TABLE'))
# Error: dataset is not a string (a length one character vector).
Any idea if with BigQuery is not possible to do what I'm trying to do?
Not a BigQuery user, so I can't test this, but from looking at this example the issue appears unrelated to how you are piping queries (%>%). Instead, it appears that BigQuery does not support receiving a tbl() with an sql string as the second argument.
So it is likely to work when the second argument is a string with the name of the table:
tbl(con, "db_name.table_name")
But you should expect it to fail if the second argument is of type sql:
query_string = "SELECT * FROM db_name.table_name"
tbl(con, sql(query_string))
Other things to test:
Using odbc::odbc() to connect to BigQuery instead of bigrquery::bigquery(). The problem could be caused by the bigrquery package.
The second approach without the conversion to sql: tbl(con, query_string)
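For instance, a hedged sketch of the string-table-name approach above (table and column names are placeholders, untested):
# Sketch only: pass the table name as a string, then keep wrangling lazily in the database
tbl(con, "db_name.table_name") %>%
  select(var1, var2) %>%
  mutate(var3 = var1 + var2)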

Can dplyr functions work when connected to SQL Server?

I have a table in a SQL Server database, and I want to manipulate it with the dbplyr/dplyr R packages.
library(odbc)
library(DBI)
library(tidyverse)

con <- DBI::dbConnect(odbc::odbc(),
                      Driver = "SQL Server",
                      Server = "xx.xxx.xxx.xxx",
                      Database = "stock",
                      UID = "userid",
                      PWD = "userpassword")

startday = 20150101
day = tbl(con, in_schema("dbo", "LogDay"))
I tried this simple dplyr pipeline after connecting to the remote database, but it fails with error messages.
day %>%
  mutate(ovnprofit = ifelse(stockCode == lead(stockCode, 1),
                            lead(priceOpen, 1) / priceClose,
                            NA)) %>%
  select(logDate, stockCode, ovnprofit)
How can I solve this problem?
P.S. When I apply the dplyr functions after transforming 'day' into a tibble first, it works. However, I want to apply them directly, without converting to a tibble, because that is too time consuming and memory intensive.
The problem is most likely with the lead function. In R a data set has an ordering, but in SQL datasets are unordered and the order needs to be specified explicitly.
Note that the SQL code in the error message contains:
LEAD("stockCode", 1.0, NULL) OVER ()
That there is nothing in the brackets after the OVER suggests to me that SQL expects something here.
Two ways you can resolve this:
By using arrange before the mutate
By specifying the order_by argument of lead
# Approach 1: arrange before the mutate
day %>%
  arrange(logDate) %>%
  mutate(ovnprofit = ifelse(stockCode == lead(stockCode, 1),
                            lead(priceOpen, 1) / priceClose,
                            NA)) %>%
  select(logDate, stockCode, ovnprofit)

# Approach 2: specify the order_by argument of lead
day %>%
  mutate(ovnprofit = ifelse(stockCode == lead(stockCode, 1, order_by = logDate),
                            lead(priceOpen, 1, order_by = logDate) / priceClose,
                            NA)) %>%
  select(logDate, stockCode, ovnprofit)
However, it also looks like you only want to lead within each stockCode. This can be done with group_by. I would recommend the following:
output = day %>%
  group_by(stockCode) %>%
  arrange(logDate) %>%
  mutate(next_priceOpen = lead(priceOpen, 1)) %>%
  mutate(ovnprofit = next_priceOpen / priceClose) %>%
  select(logDate, stockCode, ovnprofit)
If you view the generated SQL with show_query(output) you should see the SQL OVER clause similar to the following:
LEAD(priceOpen, 1.0, NULL) OVER (PARTITION BY stockCode ORDER BY logDate)
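As a side note, dbplyr also provides window_order() for setting the ordering used by window functions on a remote table; a hedged sketch (untested) of the same query written that way:
# Sketch only: group_by() becomes PARTITION BY and window_order() becomes the
# ORDER BY inside the OVER (...) clause generated for lead()
day %>%
  group_by(stockCode) %>%
  dbplyr::window_order(logDate) %>%
  mutate(ovnprofit = lead(priceOpen, 1) / priceClose) %>%
  select(logDate, stockCode, ovnprofit)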

R Updating database with dbi

I have worked a little bit with DBI in R. My first question is more about best practice, as appending new data to the DB currently takes more time than I hoped. The second is about an error I receive when trying to update old information in the database. Here is my current workflow when inserting new data into an existing table in the DB:
con <- dbConnect(odbc(), "myDSN")

# Example table 1
tbl1 <- tibble(Key = c("A", "B", "C", "D", "E"),
               Val = c(1, 2, 3, 4, 5))

# Original table in DB
dbWriteTable(con, "tbl1", tbl1, overwrite = TRUE)

# Link to original table
db_tbl <- tbl(con, in_schema("dbo", "tbl1"))

# New data
tbl2 <- tibble(Key = c("D", "E", "F", "G", "H"),
               Val = c(10, 11, 12, 13, 14))

# Write it to staging
dbWriteTable(con, "tbl1_staging", tbl2, overwrite = TRUE)

# Get a link to staging
db_tblStaging <- tbl(con, in_schema("dbo", "tbl1_staging"))

# Compare info
not_in_db <- db_tblStaging %>%
  anti_join(db_tbl, by = "Key") %>%
  collect()

# Append missing info to DB
dbWriteTable(con, "tbl1", not_in_db, append = TRUE)

# Voila!
dbReadTable(con, "tbl1")
That will do the trick, but I'm looking for a better solution, as I dislike the collect() part of the code, which (as far as I understand it) means I'm bringing data into R memory; that could become a problem in the future when I have bigger data. What I hoped would work is something like this, which would let me append new data to the DB on the fly, without it passing through R memory.
# What I hoped to have
db_tblStaging %>%
  anti_join(db_tbl, by = "Key") %>%
  dbWriteTable(con, "tbl1", ., append = TRUE)
The second problem is updating the existing table. Here is what I tried, but an error emerges and I can't figure it out. Here is the question whose answer I tried to copy: How to pass data.frame for UPDATE with R DBI. I would like to update keys D and E with the new values in Val.
# Trying to update tbl1
update_values <- db_tblStaging %>%
  semi_join(db_tbl, by = "Key") %>%
  collect()

update <- dbSendQuery(con, 'UPDATE tbl1
                            SET "Val" = ?
                            WHERE Key = ?')
dbBind(update, update_values)

# Error in result_bind(res@ptr, as.list(params)) :
#   nanodbc/nanodbc.cpp:1587: 42000: [Microsoft][ODBC Driver 13 for SQL Server][SQL Server]Incorrect syntax near the keyword 'Key'.
Has the package changed in some way? I can't spot my syntax error.
Consider running pure SQL after your staging table uploads, as it looks like you need NOT EXISTS (to avoid duplicates) and UPDATE ... INNER JOIN (for existing records). This avoids any client-side query imports and exports in R.
And Key is a reserved word in SQL Server. Hence, escape it with square brackets.
apn_sql <- "INSERT INTO dbo.tbl ([Key], [Val])
            SELECT s.[Key], s.[Val] FROM dbo.tbl_staging s
            WHERE NOT EXISTS
                  (SELECT 1 FROM dbo.tbl t
                   WHERE t.[Key] = s.[Key])"
dbSendQuery(con, apn_sql)
upd_sql <- "UPDATE t
            SET t.Val = s.Val
            FROM dbo.tbl t
            INNER JOIN dbo.tbl_staging s
                ON t.[Key] = s.[Key]"
dbSendQuery(con, upd_sql)
Rextester demo
In fact, SQL Server has the MERGE query to handle both in one call:
MERGE dbo.tbl AS Target
USING (SELECT [Key], [Val] FROM dbo.tbl_staging) AS Source
    ON (Target.[Key] = Source.[Key])
WHEN MATCHED THEN
    UPDATE SET Target.Val = Source.Val
WHEN NOT MATCHED BY TARGET THEN
    INSERT ([Key], [Val])
    VALUES (Source.[Key], Source.[Val]);
Rextester demo
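If you want to drive this from R after the staging upload, a hedged sketch (the MERGE text is simply the statement above):
# Sketch only: send the MERGE statement from R once tbl1_staging has been written
merge_sql <- "MERGE dbo.tbl AS Target
              USING (SELECT [Key], [Val] FROM dbo.tbl_staging) AS Source
                  ON (Target.[Key] = Source.[Key])
              WHEN MATCHED THEN
                  UPDATE SET Target.Val = Source.Val
              WHEN NOT MATCHED BY TARGET THEN
                  INSERT ([Key], [Val])
                  VALUES (Source.[Key], Source.[Val]);"
DBI::dbExecute(con, merge_sql)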

How can I append data to a PostgreSQL table with `dplyr` without `collect()`?

The table reg_data is a PostgreSQL table. It turns out to be faster to run the regressions in PostgreSQL. But, as I am running it for 100,000s of data sets, I want to do it data set by data set and append the results of each to a table.
Is there a way to append PostgreSQL data to a PostgreSQL table using native dplyr verbs? I'm not sure that there's a huge cost to bringing the data to R then sending them back to PostgreSQL (it's just 6 numbers and a couple of identifying fields), but it does seem inelegant.
library(dplyr)

pg <- src_postgres()
reg_data <- tbl(pg, "reg_data")

reg_results <-
  reg_data %>%
  summarize(r_squared = regr_r2(y, x),
            num_obs = regr_count(y, x),
            constant = regr_intercept(y, x),
            slope = regr_slope(y, x),
            mean_analyst_fog = regr_avgx(y, x),
            mean_manager_fog = regr_avgy(y, x)) %>%
  collect() %>%
  as.data.frame()

# Push to database.
dbWriteTable(pg$con, c("bgt", "within_call_data"), reg_results,
             append = TRUE, row.names = FALSE)
dplyr does not include commands to insert or update records in a database, so there is not a complete native dplyr solution for this. But you could combine dplyr with regular SQL statements to avoid bringing the data to R.
Let's start by reproducing your steps before the collect() statement:
library(dplyr)

pg <- src_postgres()
reg_data <- tbl(pg, "reg_data")

reg_results <-
  reg_data %>%
  summarize(r_squared = regr_r2(y, x),
            num_obs = regr_count(y, x),
            constant = regr_intercept(y, x),
            slope = regr_slope(y, x),
            mean_analyst_fog = regr_avgx(y, x),
            mean_manager_fog = regr_avgy(y, x))
Now, you could use compute() instead of collect() to create a temporary table in the database.
temp.table.name <- paste0(sample(letters, 10, replace = TRUE), collapse = "")
reg_results <- reg_results %>% compute(name = temp.table.name)
Here temp.table.name is a random table name. By using the option name = temp.table.name in compute(), we assign this random name to the temporary table that is created.
Now we will use the RPostgreSQL library to create an insert query that uses the results stored in the temporary table. As the temporary table only lives in the connection created by src_postgres(), we need to reuse that connection.
library(RPostgreSQL)
copyconn <- pg$con
class(copyconn) <- "PostgreSQLConnection" # I get an error if I don't fix the class
Finally, the insert query:
sql <- paste0("INSERT INTO destination_table SELECT * FROM ", temp.table.name, ";")
dbSendQuery(copyconn, sql)
So, everything is happening in the database and the data is not brought into R.
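If you also want to clean up the temporary table afterwards, a hedged sketch (untested):
# Sketch only: drop the intermediate table once the insert has run
dbSendQuery(copyconn, paste0("DROP TABLE ", temp.table.name, ";"))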
EDIT
Previous versions of this post broke encapsulation when we obtained the temporary table name from reg_results. This is avoided by using the name = argument of compute().
Another option would be to use sql_render() to create each SQL statement, then db_save_query() to create the table from a SQL statement, and finally a manual statement to append to that table. To loop through the queries, the purrr commands map() and walk() are used. Preferably a command like compute() should do this, but in lieu of that, the following is a fully reproducible example:
library(dplyr)
library(dbplyr)
library(purrr)

# Setting up a SQLite db with 3 tables
con <- DBI::dbConnect(RSQLite::SQLite(), path = ":memory:")
copy_to(con, filter(mtcars, cyl == 4), "mtcars1")
copy_to(con, filter(mtcars, cyl == 6), "mtcars2")
copy_to(con, filter(mtcars, cyl == 8), "mtcars3")

# Pre-process the SQL statements
tables <- c("mtcars1", "mtcars2", "mtcars3")
all_results <- tables %>%
  map(~{
    tbl(con, .x) %>%
      summarise(avg_mpg = mean(mpg),
                records = n()) %>%
      sql_render()
  })

# Execute the SQL statements: the first one creates the table,
# subsequent queries are inserted into the table
first_query <- TRUE
all_results %>%
  walk(~{
    if (first_query == TRUE) {
      first_query <<- FALSE
      db_save_query(con, ., "results")
    } else {
      dbExecute(con, build_sql("INSERT INTO results ", .))
    }
  })

tbl(con, "results")
dbDisconnect(con)
