I want to parameterize the columns to select in a SQL query:
select ? from my_table
I tried it with glue_sql:
glue::glue_sql(con, "select {x} from my_table", x = noquote("mycolumn"))
but the result is:
select 'mycolumn' from my_table
instead of:
select mycolumn from my_table
any ideas?
You want either to not quote the identifier at all, or to quote it correctly (with double quotes, i.e. as an identifier rather than as a string). The help for glue_sql states:
They automatically quote character results, quote identifiers if the glue expression is surrounded by backticks
Try:
glue::glue_sql(.con=con, "select {`x`} from my_table", x="mycolumn")
# <SQL> select "mycolumn" from my_table
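If you prefer to do the identifier quoting yourself, DBI::dbQuoteIdentifier gives the same result (a minimal sketch using DBI's ANSI() dummy connection; substitute your real con):
x <- "mycolumn"
DBI::SQL(paste("select", DBI::dbQuoteIdentifier(DBI::ANSI(), x), "from my_table"))
# <SQL> select "mycolumn" from my_table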
If other readers wonder why single quotes are bad: single quotes turn it into a string literal, meaning it is returned as data, not treated as a column reference. For instance, using some table with an Id field:
DBI::dbGetQuery(con, "select 'Id' from sometable limit 3")
#
# 1 Id
# 2 Id
# 3 Id
(Notice there is no column header; a proper query might have named the string literal with select 'Id' as somecolumnname ..., but at that point it becomes quite clear why single quotes are not right.)
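For example, aliasing the literal does give the column a name, but it is still just repeated data (a sketch against the same hypothetical sometable):
DBI::dbGetQuery(con, "select 'Id' as somecolumnname from sometable limit 3")
#   somecolumnname
# 1             Id
# 2             Id
# 3             Id
Double quotes, by contrast, reference the actual column: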
DBI::dbGetQuery(con, 'select "Id" from sometable limit 3')
# Id
# 1 03E33A23-3F2C-1234-5678-90ABCDEF1234
# 2 04E33A23-3F2C-1234-5678-90ABCDEF1234
# 3 8114F80C-624D-1234-5678-90ABCDEF1234
You need something like this:
library(DBI)
library(tidyverse)
x <- "mycolumn"
con %>% dbGetQuery(glue::glue("select {x} from my_table"))
I need to submit the following query through my R code:
select t."date"
from db.table t
where t."date" > '2016-01-01';
To generate this quoted string properly, I tried:
sqlquery <- dbQuoteString(ANSI()
, paste0("select t.", '"', "date", '"',"
from db.table t
where t.", '"',"date", '"'," > '2016-01-01';")
)
The output of which is:
<SQL> 'select t."date"
from db.table t
where t."date" > ''2016-01-01'';'
So I can use:
data <- DBI::dbGetQuery(con, sqlquery)
However, there are doubled single quotes '' instead of a single one ' (around 2016-01-01).
How do I overcome that?
Several layers to this onion.
If you need to quote something, use dQuote or sQuote; they handle both the opening and closing quotes for you. For instance,
dQuote("date")
# [1] "\"date\""
Your use of dbQuoteString quotes your whole query as if it were a string literal. Note the ' before select, indicating that the result is a string literal surrounded by single quotes (the common SQL way of delimiting string literals). You could just as easily have written
dbQuoteString(ANSI(), "Four score and seven years ago ... said by \"Lincoln\" in '1863'")
# <SQL> 'Four score and seven years ago ... said by "Lincoln" in ''1863'''
The reason you see '' is that dbQuoteString escapes the single quotes that SQL uses to surround string literals. This produces a string, not something that can be executed as a query. In fact, dbQuoteIdentifier and dbQuoteString are what you should be using to build your original query, instead of literal quotes and/or my dQuote from point 1:
DBI::SQL(paste("select t.", DBI::dbQuoteIdentifier(DBI::ANSI(), "date"), "from db.table t where t.", DBI::dbQuoteIdentifier(DBI::ANSI(), "date"), ">", DBI::dbQuoteString(DBI::ANSI(), "2016-01-01")))
# <SQL> select t. "date" from db.table t where t. "date" > '2016-01-01'
(Admittedly, DBI::SQL is not strictly necessary here; it's really just a character string with a wrapper that makes it print more prettily on the R console.)
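For instance:
DBI::SQL("select 1 as x")
# <SQL> select 1 as x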
Consider not manually encoding data and such into your queries; minor mistakes break queries (or possibly worse, though unlikely in the general case). I strongly urge you to read through https://solutions.rstudio.com/db/best-practices/run-queries-safely/. There, they identify the use of the dbQuote* functions as "Manual escaping", which is the last method recommended (least-preferred while still somewhat safe). Other options include:
Parameterized queries, using ? placeholders for data in the query.
ret <- DBI::dbGetQuery(con, 'select t."date" from db.table t where "date" > ?', params = list("2016-01-01"))
Use glue::glue_sql:
mydate <- "2016-01-01"
sql <- glue::glue_sql(.con = con, 'select t."date" from db.table t where "date" > {mydate}')
sql
# <SQL> select t."date" from db.table where "date" > '2016-01-01'
ret <- DBI::dbGetQuery(con, sql)
Use DBI::sqlInterpolate:
sql <- DBI::sqlInterpolate(con, 'select t."date" from db.table t where "date" > ?mydate', mydate = "2016-01-01")
sql
# <SQL> select t."date" from db.table where "date" > '2016-01-01'
ret <- DBI::dbGetQuery(con, sql)
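Tying this back to the column-selection question above: glue_sql can interpolate an identifier (backticks) and a value in the same statement. A sketch, assuming the same con and table, with col as a hypothetical variable holding the column name:
col   <- "date"
since <- "2016-01-01"
sql <- glue::glue_sql('select t.{`col`} from db.table t where t.{`col`} > {since}', .con = con)
sql
# <SQL> select t."date" from db.table t where t."date" > '2016-01-01'
ret <- DBI::dbGetQuery(con, sql)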
Let's suppose I have data like
column
ABC
ABC PQR
ABC (B21)
XYZ ABC
and I want the output to be the first word, i.e.
ABC
XYZ
i.e. grouped by the first word of the column,
but I am not able to remove the part of the string after the space.
I believe that the following would do what you want :-
SELECT * FROM mytable GROUP BY CASE WHEN instr(mycolumn,' ') > 0 THEN substr(mycolumn,1,instr(mycolumn,' ')-1) ELSE mycolumn END;
Obviously, change the table and column names appropriately.
As an example, using your data plus some extra rows to demonstrate, the following :-
DROP TABLE IF EXISTS mytable;
CREATE TABLE IF NOT EXISTS mytable (mycolumn);
INSERT INTO mytable VALUES ('ABC'),('ABC PQR'),('ABC (B21)'),('XYZ'),('A B'),('AAAAAAAAAAAAAAAAAAAAAAAA B'),(' ABC'),(' XZY');
SELECT * FROM mytable;
SELECT *,group_concat(mycolumn) FROM mytable GROUP BY CASE WHEN instr(mycolumn,' ') > 0 THEN substr(mycolumn,1,instr(mycolumn,' ')-1) ELSE mycolumn END;
DROP TABLE IF EXISTS mytable;
group_concat is added to show the values included in each group.
Produces two result sets: the ungrouped table (first SELECT), and the grouped result (plus the group_concat column), in which the first row groups ' ABC' and ' XZY' together because each begins with a space (so the text before the first space is empty).
You don't want to do any aggregation, so there is no need for a GROUP BY clause.
Use string functions like SUBSTR() and INSTR() to get the 1st word of each string and then use DISTINCT to remove duplicates from the results:
SELECT DISTINCT SUBSTR(columnname, 1, INSTR(columnname || ' ', ' ') - 1) new_column
FROM tablename
Results:
new_column
ABC
XYZ
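The || ' ' is what makes this work for one-word values too: appending a space guarantees INSTR() always finds one, so a value like 'ABC' comes back whole instead of empty. A quick sketch with literals standing in for columnname:
SELECT SUBSTR('ABC PQR', 1, INSTR('ABC PQR' || ' ', ' ') - 1) AS with_space,    -- 'ABC'
       SUBSTR('ABC',     1, INSTR('ABC'     || ' ', ' ') - 1) AS without_space; -- 'ABC'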
I have a string like this: ('car, bus, train')
I want to convert it so it can be used in an IN clause. Basically I want to convert it to
('car','bus','train'). How do I do this in Teradata?
I don't know how you are getting data like that, but if you have no control over that, you can use STRTOK_SPLIT_TO_TABLE.
select t.* from table (strtok_split_to_table(1,'car, bus, train',',')
returns (outkey integer,tokennum integer,resultstring varchar(25))) as t
Run by itself, that gives you:
outkey  tokennum  resultstring
1       1         car
1       2         bus
1       3         train
You can use that as a derived table and join it to the table you want to filter by. Something like:
select
    yourtable.*
from
    yourtable
    inner join (select t.* from table (strtok_split_to_table(1,'car, bus, train',',')
                returns (outkey integer,tokennum integer,resultstring varchar(25))) as t) dt
    on yourtable.yourcolumn = dt.resultstring
Here is another way of splitting the input for any number of commas and using it in an IN clause.
SELECT regexp_substr('car,bus,train','[^,]+',1,day_of_calendar) fields
FROM sys_calendar.calendar
WHERE day_of_calendar <= (CHAR('car,bus,train') - CHAR(oreplace('car,bus,train',',','')))+1;
Output of the Query
fields
~~~~~~~~
bus
car
train
Here is the syntax to use it in a WHERE clause:
SELECT * FROM <your table>
WHERE yourtable.requiredColumn in
(
SELECT regexp_substr('car,bus,train','[^,]+',1,day_of_calendar) fields
FROM sys_calendar.calendar
WHERE
day_of_calendar <= (CHAR('car,bus,train') - CHAR(oreplace('car,bus,train',',','')))+1
);
Basically, what we are doing here is splitting the string at each comma; the expression below calculates the number of commas in the string (plus one, giving the number of tokens):
(CHAR('car,bus,train') - CHAR(oreplace('car,bus,train',',','')))+1
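For the sample string the arithmetic works out as follows (a worked example, not query output):
CHAR('car,bus,train')                  = 13  (total characters)
CHAR(oreplace('car,bus,train',',','')) = 11  (characters left after removing the two commas)
13 - 11 + 1                            = 3   (two commas, hence three tokens)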
I want to sort the semicolon-separated values within each row of a column. E.g.
Input:
abc;pqr;def;mno
xyz;pqr;abc
abc
xyz;jkl
Output:
abc;def;mno;pqr
abc;pqr;xyz
abc
jkl;xyz
Can anyone help?
Perhaps something like this. Breaking it down:
First we need to break up the strings into their component tokens, and then reassemble them, using LISTAGG(), while ordering them alphabetically.
There are many ways to break up a symbol-separated string. Here I demonstrate the use of a hierarchical query. It requires that the input strings be uniquely distinguished from each other. Since the exact same semicolon-separated string may appear more than once, and since there is no info from the OP about any other unique column in the table, I create a unique identifier (using ROW_NUMBER()) in the most deeply nested subquery. Then I run the hierarchical query to break up the inputs and then reassemble them in the outermost SELECT.
with
test_data as (
select 'abc;pqr;def;mno' as str from dual union all
select 'xyz;pqr;abc' from dual union all
select 'abc' from dual union all
select 'xyz;jkl' from dual
)
-- End of test data (not part of the solution!)
-- SQL query begins BELOW THIS LINE.
select str,
listagg(token, ';') within group (order by token) as sorted_str
from (
select rn, str,
regexp_substr(str, '([^;]*)(;|$)', 1, level, null, 1) as token
from (
select str, row_number() over (order by null) as rn
from test_data
)
connect by level <= length(str) - length(replace(str, ';')) + 1
and prior rn = rn
and prior sys_guid() is not null
)
group by rn, str
;
STR SORTED_STR
--------------- ---------------
abc;pqr;def;mno abc;def;mno;pqr
xyz;pqr;abc abc;pqr;xyz
abc abc
xyz;jkl jkl;xyz
4 rows selected.
I'm searching for multiple terms in multiple columns of a virtual table. I have checked this thread, which searches for a single word in multiple columns.
I tried the following:
SELECT * FROM table WHERE table MATCH (('A:cat OR C:cat') AND ('A:dog OR C:dog')
but it seems the AND condition is not working.
EDIT: I have tried the following:
Select count (*) FROM Table1 WHERE TBL_VIRTUAL MATCH (('A:D* AND B:D* AND C:D*') OR ('A:tar* AND B:tar* AND C:tar*'));
Select count (*) FROM Table1 WHERE TBL_VIRTUAL MATCH (('A:D* AND B:D* AND C:D*') AND ('A:tar* AND B:tar* AND C:tar*'));
Both of these queries return the same 109 results. Then I tried what @redneb mentions in the answer below:
SELECT * FROM table WHERE table MATCH '(A:D* OR B:D* OR C: D*) AND (A:tar* OR B:tar* OR C:tar*)'
SELECT * FROM table WHERE table MATCH '(A:D* OR B:D* OR C: D*) OR (A:tar* OR B:tar* OR C:tar*)'
But these return 0 results.
Any suggestion as to what I'm missing here?
Try this instead:
SELECT *
FROM mytable
WHERE mytable MATCH '(A:cat OR C:cat) AND (A:dog OR C:dog)';
However, I suspect that the following query will perform faster:
SELECT *
FROM mytable
WHERE mytable MATCH '(A:cat AND C:dog) OR (A:dog AND C:cat)';
It is equivalent to the first one.
Edit: Here's a complete example. Let's create and populate a table first:
CREATE VIRTUAL TABLE mytable USING fts3(A, C);
INSERT INTO mytable VALUES
('foo','bar'),
('dog','dog'),
('cat','cat'),
('dog','cat'),
('cat','dog');
Then the query works as expected:
sqlite> SELECT * FROM mytable WHERE mytable MATCH '(A:cat AND C:dog) OR (A:dog AND C:cat)';
A C
---------- ----------
dog cat
cat dog
For an OR condition, type OR between the terms, i.e.: MATCH ('A:cat OR C:cat')
For an AND condition, just don't type anything between them, i.e.: MATCH ('A:cat C:cat')
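For example, against the mytable created in the previous answer, the implicit-AND form returns only the row matching both terms (a sketch; note that with FTS3/FTS4 the parenthesised AND/OR syntax used above additionally requires a build compiled with the enhanced query syntax):
SELECT * FROM mytable WHERE mytable MATCH ('A:cat C:dog');
-- A          C
-- ---------- ----------
-- cat        dog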