I am trying to query data from a ClickHouse database from R, filtering by a subset of IDs. Here is an example:
library(data.table)
library(RClickhouse)
library(DBI)
subset <- paste(traffic[,unique(IDs)][1:30], collapse = ',')
conClickHouse <- DBI::dbConnect('here is the connection')
DataX <- dbGetQuery(conClickHouse, paste0("select * from table_name
where IDs in (", subset, ")"))
As a result I get the error:
DB::Exception: Type mismatch in IN or VALUES section. Expected: FixedString(34).
Got: UInt64: While processing (IDs IN ....
Any help is appreciated
Thanks to the comment of @DennyCrane,
"select * from database where toFixedString(IDs,34) in
(toFixedString(ID1, 34), toFixedString(ID2,34 ))"
This query subsets properly.
https://clickhouse.tech/docs/en/sql-reference/functions/#strong-typing
Strong Typing
In contrast to standard SQL, ClickHouse has strong typing. In other words, it doesn’t make implicit conversions between types. Each function works for a specific set of types. This means that sometimes you need to use type conversion functions.
https://clickhouse.tech/docs/en/sql-reference/functions/type-conversion-functions/#tofixedstrings-n
select * from (select 'x' B ) where B in (select toFixedString('x',1))
DB::Exception: Types of column 1 in section IN don't match: String on the left, FixedString(1) on the right.
Use a cast with toString or toFixedString:
select * from (select 'x' B ) where toFixedString(B,1) in (select toFixedString('x',1))
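Applied to the original R code, a sketch of the fix might look like this (your_table is a placeholder for the real table name; it assumes the IDs are 34-byte strings, as the error message indicates):
ids <- traffic[, unique(IDs)][1:30]
# wrap each ID in toFixedString(...) so the IN list matches FixedString(34)
subset <- paste0("toFixedString('", ids, "', 34)", collapse = ", ")
sql <- paste0("select * from your_table where toFixedString(IDs, 34) in (", subset, ")")
DataX <- dbGetQuery(conClickHouse, sql)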
I am creating a recursive CTE in Snowflake to get the complete path, and I am getting the following error:
String 'AAAA_50>BBBB_47>CCCC_92' is too long and would be truncated in 'CONCAT'
My script is as follows (it works fine for 2 levels, but starts failing at the 3rd level):
with recursive plant
(child_col,parent_col,val )
as
(
select child_col, '' parent_col , trim(child_col) from My_view
where condition1 = 'AAA'
union all
select A.child_col,A.parent_col,
concat(trim(A.child_col),'>')||trim(val)
from My_view A
JOIN plant as B ON trim(B.child_col) = trim(A.parent_col)
)
select distinct * from plant
Most likely the child_col data type is defined as VARCHAR(N), and this type is carried through the recursion, because CONCAT returns:
The data type of the returned value is the same as the data type of
the input value(s).
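You can see this type propagation directly; a minimal illustration using SYSTEM$TYPEOF (which reports the inferred type of an expression):
select system$typeof(concat(cast('AAAA_50' as varchar(7)), '>'));
-- the result type is derived from the inputs (roughly VARCHAR(8) here),
-- not widened to fit whatever deeper recursion levels produce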
Try explicitly casting the value in the anchor member to a string, like cast(trim(child_col) as string):
Full code:
with recursive plant (child_col,parent_col,val )
as (
select child_col, '' parent_col , cast(trim(child_col) as string)
from My_view
where condition1 = 'AAA'
union all
select A.child_col, A.parent_col, concat(trim(A.child_col),'>')||trim(val)
from My_view A
join plant as B ON trim(B.child_col) = trim(A.parent_col)
)
select distinct * from plant
Remember that recursion in Snowflake is limited to 100 loops by default.
If you want to increase this limit, you need to contact Snowflake support.
References: CONCAT, Troubleshooting a Recursive CTE
I want to send a query to Oracle via ROracle with bind parameters that include a range of dates for a date column.
I try to run:
idsample <- 123
strdate <- "TO_DATE('01/02/2017','DD/MM/YYYY')"
enddate <- "TO_DATE('01/05/2017','DD/MM/YYYY')"
res <- dbGetQuery(myconn,"SELECT * FROM MYTABLE WHERE MYID = :1 AND MYDATE BETWEEN :2 AND :3", data=data.frame(MYID =idsample , MYDATE=c(strdate,enddate )))
but I get error :
"bind data does not match bind specification"
I could find no documentation which covers using more than one positional parameter, but if one parameter corresponds to a single column of a data frame, then by this logic three parameters should correspond to three columns:
idsample <- 123
strdate <- "TO_DATE('01/02/2017', 'DD/MM/YYYY')"
enddate <- "TO_DATE('01/05/2017', 'DD/MM/YYYY')"
res <- dbGetQuery(myconn,
paste0("SELECT * FROM MYTABLE WHERE MYID = :1 AND ",
"MYDATE BETWEEN TO_DATE(:2, 'DD/MM/YYYY') AND TO_DATE(:3, 'DD/MM/YYYY')"),
data=data.frame(idsample, strdate, enddate))
Note that, from the point of view of the API, there is nothing special about strdate and enddate that would require them to be passed as a vector.
Edit:
The problem with making TO_DATE part of the parameter is that it will probably end up being escaped as a string. In other words, with the original approach you would end up with the following in your WHERE clause:
WHERE MYDATE BETWEEN
'TO_DATE('01/02/2017','DD/MM/YYYY')' AND 'TO_DATE('01/05/2017','DD/MM/YYYY')'
That is, the TO_DATE function call itself ends up inside a string literal. Instead, bind only the date strings.
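Alternatively, a sketch that skips TO_DATE entirely, assuming your ROracle version maps R Date objects to Oracle DATE when binding:
idsample <- 123
strdate <- as.Date("2017-02-01")
enddate <- as.Date("2017-05-01")
# bind native Date values; no TO_DATE needed in the SQL text
res <- dbGetQuery(myconn,
  "SELECT * FROM MYTABLE WHERE MYID = :1 AND MYDATE BETWEEN :2 AND :3",
  data = data.frame(idsample, strdate, enddate))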
I'm having trouble passing NULL as a parameter in an INSERT query using RPostgres and RPostgreSQL.
In PostgreSQL:
create table foo (ival int, tval text, bval bytea);
In R:
This works:
res <- dbSendQuery(con, "INSERT INTO foo VALUES($1, $2, $3)",
params=list(ival=1,
tval= 'not quite null',
bval=charToRaw('asdf')
)
)
But this throws an error:
res <- dbSendQuery(con, "INSERT INTO foo VALUES($1, $2, $3)",
params=list(ival=NULL,
tval= 'not quite null',
bval=charToRaw('asdf')
)
)
Using RPostgres, the error message is:
Error: expecting a string
Under RPostgreSQL, the error is:
Error in postgresqlExecStatement(conn, statement, ...) :
RS-DBI driver: (could not Retrieve the result : ERROR: invalid input
syntax for integer: "NULL"
)
Substituting NA would be fine with me, but it isn't a workaround: a literal 'NA' gets written to the database.
Using e.g. integer(0) gives the same "expecting a string" message.
You can use NULLIF directly in your insert statement:
res <- dbSendQuery(con, "INSERT INTO foo VALUES(NULLIF($1, 'NULL')::integer, $2, $3)",
params=list(ival=NULL,
tval= 'not quite null',
bval=charToRaw('asdf')
)
)
This works with NA as well.
One option to work around the problem, given that there is no obvious way to express a NULL value in R that the RPostgreSQL package will translate successfully, is simply to omit the column whose value you want to be NULL.
So in your example you could use this:
res <- dbSendQuery(con, "INSERT INTO foo (tval, bval) VALUES($1, $2)",
params=list(tval = 'not quite null',
bval = charToRaw('asdf')
)
)
when you want ival to have a NULL value. This of course assumes that ival in your table is nullable, which may not be the case.
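If the set of NULL columns varies from row to row, a hedged sketch that builds the column list dynamically (assuming the params list is named after the table's columns):
params <- list(ival = NULL, tval = 'not quite null', bval = charToRaw('asdf'))
params <- Filter(Negate(is.null), params)   # drop the NULL entries
cols <- paste(names(params), collapse = ", ")
phs <- paste0("$", seq_along(params), collapse = ", ")
res <- dbSendQuery(con,
  sprintf("INSERT INTO foo (%s) VALUES(%s)", cols, phs),
  params = unname(params))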
Thanks all for the help. Tim's answer is a good one, and I used it to catch the integer values. I went a different route for the rest of it, writing a function in PostgreSQL to handle most of this. It looks roughly like:
CREATE OR REPLACE FUNCTION add_stuff(ii integer, tt text, bb bytea)
RETURNS integer
AS
$$
DECLARE
bb_comp bytea;
rows integer;
BEGIN
bb_comp = convert_to('NA', 'UTF8'); -- my database is in UTF8.
-- front-end catches ii is NA; RPostgres blows up
-- trying to convert 'NA' to integer.
tt = nullif(tt, 'NA');
bb = nullif(bb, bb_comp);
INSERT INTO foo VALUES (ii, tt, bb);
GET DIAGNOSTICS rows = ROW_COUNT;
RETURN rows;
END;
$$
LANGUAGE plpgsql VOLATILE;
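Called from R, the wrapper might look like this (a sketch; 'NA' is the sentinel value the function nullifies for the text and bytea arguments):
res <- dbGetQuery(con, "SELECT add_stuff($1, $2, $3)",
                  params = list(1L, 'NA', charToRaw('NA')))
# tt and bb arrive as 'NA' and are converted to NULL inside the function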
Now to have a look at the RPostgres source and see if there's an easy-enough way to make it handle NULL / NA a bit more easily. Hoping that it's missing because nobody thought of it, not because it's super-tricky. :)
This will give the "wrong" answer if someone is trying to put literally 'NA' into the database and mean something other than NULL / NA (e.g. NA = "North America"); given our use case, that seems very unlikely. We'll see in six months time.
I have a variable x which contains 20000 IDs. I want to write a SQL query like:
select * from tablename where ID in x;
I am trying to implement this in R so that I get values only for the IDs in x. The following is my attempt:
dbSendQuery(mydb, "select * from tablename where ID in ('$x') ")
I get no error when trying this, but it returns 0 rows.
Next I tried using:
sprintf("select * from tablename where ID in %s",x)
But this creates 20000 individual queries, which could prove costly on the database.
Can anybody suggest a way to fetch the rows for all IDs in x into an R data frame with a single query?
You need to have the IDs in the actual query string. Here is how I would do it with gsub:
x <- LETTERS[1:3]
sql <- "select * from tablename where ID in X_ID_CODES "
x_codes <- paste0("('", paste(x, collapse="','"), "')")
sql <- gsub("X_ID_CODES", x_codes, sql)
# see new output
cat(sql)
select * from tablename where ID in ('A','B','C')
# then submit the query
#dbSendQuery(mydb, sql)
How about pasting it:
dbSendQuery(mydb, paste("select * from tablename where ID in (", paste(x, collapse = ","), ")"))
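With 20000 IDs you may also run into query-length limits on the server side; a hedged sketch that splits x into chunks and binds the results together (the chunk size of 1000 is an arbitrary choice; the quoting assumes string IDs):
chunks <- split(x, ceiling(seq_along(x) / 1000))
res_list <- lapply(chunks, function(ids) {
  sql <- paste0("select * from tablename where ID in ('",
                paste(ids, collapse = "','"), "')")
  dbGetQuery(mydb, sql)
})
res <- do.call(rbind, res_list)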
I'm trying to make a query from R to a MySQL database. The query iterates over a list and therefore changes dynamically. Each query, based on each element of the list, will in general return multiple rows. The database I'm using can be downloaded from here: http://www.ghtorrent.org/msr14.html
In the end, all the results should end up in the same output, looking like this:
pull_req_id,user,action,created_at
12359,arthurnn,opened,1380126837
12359,rafaelfranca,discussed,1380127245
12359,arthurnn,discussed,1380127676
...
The code that I have now looks like this, but it's not working:
library(DBI)
library(RMySQL)
m <- dbDriver("MySQL");
con <- dbConnect(m, user='msr14', password='msr14', host='localhost', dbname='msr14');
all_rails_projects <- dbGetQuery(con, 'SELECT * FROM projects WHERE name = "rails";')
all_rails_prs <- dbGetQuery(con, 'SELECT id FROM pull_requests WHERE base_repo_id = 78852;')
out <- nrow(all_rails_prs)
names(out) <- as.list(all_rails_prs)
df <- c('pull_req_id', 'user', 'action', 'created_at')
out <- numeric(length(df))
names(out) <- df
for (i in nrow(all_rails_prs)) {
SQL <- paste("select user, action, created_at from
(
select prh.action as action, prh.created_at as created_at, u.login as user
from pull_request_history prh, users u
where prh.pull_request_id ='", all_rails_prs[i,], "'",
" and prh.actor_id = u.id
union
select ie.action as action, ie.created_at as created_at, u.login as user
from issues i, issue_events ie, users u
where ie.issue_id = i.id
and i.pull_request_id ='", all_rails_prs[i,], "'",
" and ie.actor_id = u.id
union
select 'discussed' as action, ic.created_at as created_at, u.login as user
from issues i, issue_comments ic, users u
where ic.issue_id = i.id
and u.id = ic.user_id
and i.pull_request_id ='", all_rails_prs[i,], "'",
"union
select 'reviewed' as action, prc.created_at as created_at, u.login as user
from pull_request_comments prc, users u
where prc.user_id = u.id
and prc.pull_request_id ='", all_rails_prs[i,], "'",
") as actions
order by created_at;", sep = "")
res <- dbGetQuery(con, SQL)
out[i] <- dbFetch(res, n = -1)
}
This generates the following error message:
Error in (function (classes, fdef, mtable) :
unable to find an inherited method for function ‘dbFetch’ for signature ‘"data.frame"’
In addition: Warning message:
In mysqlExecStatement(conn, statement, ...) :
RS-DBI driver warning: (unrecognized MySQL field type 7 in column 2 imported as character)
I've tried different variants, but they all result in some kind of error, so it seems as if I'm simply not setting up the query structure the right way. Does anyone have any advice?
According to the docs, dbGetQuery calls fetch by default if the query is successful.
res is already a data frame, so you can put it into out directly without having to fetch the records.
Also, if you want to store the results in a dataframe and not a list, you might want to try:
#get the results
res<-dbGetQuery(con, SQL)
#if it's not null, add the request id and rbind it to the out dataframe
if(!is.null(res)){
out<-rbind(out,cbind(rep(all_rails_prs[i,],nrow(res)),res))
}
There might also be an error in your for syntax: you probably need for (i in 1:nrow(all_rails_prs)).
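Putting both fixes together, a sketch of the corrected loop (SQL is built exactly as in the question and elided here):
out <- data.frame()
for (i in 1:nrow(all_rails_prs)) {
  # ... build SQL for all_rails_prs[i, ] as in the question ...
  res <- dbGetQuery(con, SQL)   # already a data frame; no dbFetch needed
  if (!is.null(res) && nrow(res) > 0) {
    out <- rbind(out, cbind(pull_req_id = all_rails_prs[i, ], res))
  }
}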