I have a tasks application with two tables. One table has the task name, date, owner, etc., and the other has the comments for the tasks, linked to the task number, so there can be multiple comments attached to a single task.
Both tables have FTS5 indexes. Within my app I want to search both tables for a word and present the matching rows to the user. I have the below working for each table individually, but how do I construct a query that returns data from both FTS5 tables?
(Python 3.6)
c.execute("select * from task_list where task_list = ? ", [new_search])
c.execute("select * from comments where comments = ? ", [new_search])
Thanks @Tomalak, I never thought of doing that; I was focused on the query. Here's what I came up with, and it works for my purposes. There are probably better ways to achieve the same result, but I'm a beginner. This is a Tkinter app.
def db_search():
    conn = sqlite3.connect('task_list_database.db')
    c = conn.cursor()
    d = conn.cursor()
    new_search = entry7.get()
    # FTS5 accepts "column = value" as shorthand for a full-text MATCH query
    c.execute("select * from task_list where task_list = ? ", [new_search])
    d.execute("select * from comments where comments = ? ", [new_search])
    rows = c.fetchall() + d.fetchall()
    # repopulate the treeview with the combined results
    clear_tree(tree)
    for row in rows:
        tree.insert("", END, values=row)
    conn.close()
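For completeness, the two searches can also be combined into a single query with UNION ALL, as long as each SELECT returns the same number of columns. A minimal sketch, assuming only the rowid and the matched text are needed (the selected columns are placeholders for whatever your tables actually hold):

c.execute(
    "select 'task' as source, rowid, task_list from task_list where task_list = ? "
    "union all "
    "select 'comment' as source, rowid, comments from comments where comments = ?",
    [new_search, new_search])
rows = c.fetchall()  # hits from both FTS5 tables, tagged by source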
I have read-only access to a Postgres database; I cannot write to the database.
Q. Is there a way to construct and run a SQL query where I join a data frame (or other R object) to a table in a read-only Postgres database?
This is for accessing data from WRDS, https://wrds-www.wharton.upenn.edu/
Here's an attempt in pseudocode:
#establish a connection to a database
con <- dbConnect(RPostgres::Postgres(),
                 host = 'host.org',
                 port = 1234,
                 dbname = 'db_name',
                 sslmode = 'require',
                 user = 'username',
                 password = 'password')
#create an R dataframe (or other object)
df <- data.frame( customer_id = c('a123', 'a-345', 'b0') )
#write the SQL query we will run (pseudocode: df is an R data frame, not a table in the database)
sql_query <- "
  SELECT t.customer_id, t.*
  FROM df t
  LEFT JOIN table_name tbl
    ON t.customer_id = tbl.customer_id
"
res <- dbSendQuery(con, sql_query)
my_query_results <- dbFetch(res)
dbClearResult(res)
my_query_results
Note and edit: The example query I provided is intentionally super simple for example purposes.
In my actual queries, there might be 3 or more columns I want to join on, and millions of rows I want to join on.
Use the copy_inline function from the dbplyr package, which was added following an issue filed on this topic. See also the question here.
An example of its use is found here.
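A minimal sketch of how this can look (assuming dbplyr >= 2.2.0; the table and column names are taken from the question above):

library(dplyr)
library(dbplyr)

df <- data.frame(customer_id = c('a123', 'a-345', 'b0'))

# copy_inline() embeds df in the generated SQL itself (as an inline VALUES list),
# so nothing needs to be written to the read-only database
result <- copy_inline(con, df) %>%
  left_join(tbl(con, "table_name"), by = "customer_id") %>%
  collect()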
If your join is on a single condition, it can be rewritten using an IN clause:
In SQL:
SELECT customer_id
FROM table_name
WHERE customer_id in ('a123', 'a-345', 'b0')
Programmatically from R:
sql_query = sprintf(
"SELECT customer_id
FROM table_name
WHERE customer_id in (%s)",
paste(sQuote(df$customer_id, q = FALSE), collapse = ", ")
)
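Note that pasting values into SQL by hand is fragile if the values can contain quotes; DBI's quoting helpers do the escaping for you. A sketch of the same query using DBI::dbQuoteLiteral (assuming a DBI-compliant connection):

in_list <- paste(DBI::dbQuoteLiteral(con, df$customer_id), collapse = ", ")
sql_query <- sprintf(
  "SELECT customer_id
   FROM table_name
   WHERE customer_id in (%s)",
  in_list
)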
I am just starting to work with OpenEdge, and I need to join information from two tables, but I only need the first row from the second one.
Basically I need to do a typical SQL CROSS APPLY, but in Progress. I looked in the documentation, and the statement FETCH FIRST 10 ROWS ONLY is only available in OpenEdge 11.
My query is:
SELECT * FROM PUB.la_of
INNER JOIN PUB.la_ofart ON la_of.empr_cod = la_ofart.empr_cod
    AND la_of.Cod_Ordf = la_ofart.Cod_Ordf
    AND la_of.Num_ordex = la_ofart.Num_ordex
    AND la_of.Num_partida = la_ofart.Num_partida
CROSS APPLY (
SELECT TOP 1 ofart.Cod_Ordf AS Cod_Ordf_ofart ,
ofart.Num_ordex AS Num_ordex_ofart
FROM la_ofart AS ofart
WHERE ofart.empr_cod = la_ofart.empr_cod
AND ofart.Num_partida = la_ofart.Num_partida
AND la_ofart.doc1_num = ofart.doc1_num
AND la_ofart.doc2_linha = ofart.doc2_linha
ORDER BY ofart.Cod_Ordf DESC) ofart
I am using SSMS to extract data from OE10 through an ODBC connector, querying OE with OPENQUERY.
Thanks for any help.
If I understood your question correctly, maybe you can use something like this. It may not be the best solution for your problem, but it may suit your needs.
DEF BUFFER ofart FOR la_ofart.

DEF TEMP-TABLE tt-ofart NO-UNDO LIKE ofart
    FIELD seq AS INT
    INDEX ch-seq seq.

DEF VAR i-count AS INT NO-UNDO.

EMPTY TEMP-TABLE tt-ofart.

blk:
FOR EACH la_ofart NO-LOCK,
    EACH la_of NO-LOCK
        WHERE la_of.empr_cod = la_ofart.empr_cod
          AND la_of.Cod_Ordf = la_ofart.Cod_Ordf
          AND la_of.Num_ordex = la_ofart.Num_ordex
          AND la_of.Num_partida = la_ofart.Num_partida,
    EACH ofart NO-LOCK
        WHERE ofart.empr_cod = la_ofart.empr_cod
          AND ofart.Num_partida = la_ofart.Num_partida
          AND ofart.doc1_num = la_ofart.doc1_num
          AND ofart.doc2_linha = la_ofart.doc2_linha
    BREAK BY ofart.Cod_Ordf DESCENDING:

    ASSIGN i-count = i-count + 1.

    /* keep a copy of each row, numbered in descending Cod_Ordf order */
    CREATE tt-ofart.
    BUFFER-COPY ofart TO tt-ofart
        ASSIGN tt-ofart.seq = i-count.

    /* emulate TOP 10 */
    IF i-count >= 10 THEN
        LEAVE blk.
END.

FOR EACH tt-ofart USE-INDEX ch-seq:
    DISP tt-ofart WITH SCROLLABLE 1 COL 1 DOWN NO-ERROR.
END.
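If the logic needs to stay on the SQL side (inside the OPENQUERY), the CROSS APPLY can often be emulated with a correlated subquery, since TOP 1 ... ORDER BY Cod_Ordf DESC is just the maximum Cod_Ordf per group. A sketch, untested against OE10 (a second subquery would be needed to also pull Num_ordex):

SELECT la_of.*, la_ofart.*,
       (SELECT MAX(ofart.Cod_Ordf)
        FROM PUB.la_ofart ofart
        WHERE ofart.empr_cod = la_ofart.empr_cod
          AND ofart.Num_partida = la_ofart.Num_partida
          AND ofart.doc1_num = la_ofart.doc1_num
          AND ofart.doc2_linha = la_ofart.doc2_linha) AS Cod_Ordf_ofart
FROM PUB.la_of
INNER JOIN PUB.la_ofart ON la_of.empr_cod = la_ofart.empr_cod
    AND la_of.Cod_Ordf = la_ofart.Cod_Ordf
    AND la_of.Num_ordex = la_ofart.Num_ordex
    AND la_of.Num_partida = la_ofart.Num_partida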
I'm using BigQuery on exported GA data (see the schema here).
Looking at the documentation, I see that when I select a field that is inside a record, BigQuery will automatically flatten that record and duplicate the surrounding columns.
So I tried to create a denormalized table that I could query with a more SQL-like mindset:
SELECT
CONCAT(date, " ",
    IF(hits.hour < 10, CONCAT("0", STRING(hits.hour)), STRING(hits.hour)),
    ":",
    IF(hits.minute < 10, CONCAT("0", STRING(hits.minute)), STRING(hits.minute))) AS hit_date__STRING,
CONCAT(fullVisitorId, STRING(visitId)) AS session_id__STRING,
fullVisitorId AS google_identity__STRING,
MAX(IF(hits.customDimensions.index=7, hits.customDimensions.value,NULL)) WITHIN RECORD AS customer_id__LONG,
hits.hitNumber AS hit_number__INT,
hits.type AS hit_type__STRING,
hits.isInteraction AS hit_is_interaction__BOOLEAN,
hits.isEntrance AS hit_is_entrance__BOOLEAN,
hits.isExit AS hit_is_exit__BOOLEAN,
hits.promotion.promoId AS promotion_id__STRING,
hits.promotion.promoName AS promotion_name__STRING,
hits.promotion.promoCreative AS promotion_creative__STRING,
hits.promotion.promoPosition AS promotion_position__STRING,
hits.eventInfo.eventCategory AS event_category__STRING,
hits.eventInfo.eventAction AS event_action__STRING,
hits.eventInfo.eventLabel AS event_label__STRING,
hits.eventInfo.eventValue AS event_value__INT,
device.language AS device_language__STRING,
device.screenResolution AS device_resolution__STRING,
device.deviceCategory AS device_category__STRING,
device.operatingSystem AS device_os__STRING,
geoNetwork.country AS geo_country__STRING,
geoNetwork.region AS geo_region__STRING,
hits.page.searchKeyword AS hit_search_keyword__STRING,
hits.page.searchCategory AS hits_search_category__STRING,
hits.page.pageTitle AS hits_page_title__STRING,
hits.page.pagePath AS page_path__STRING,
hits.page.hostname AS page_hostname__STRING,
hits.eCommerceAction.action_type AS commerce_action_type__INT,
hits.eCommerceAction.step AS commerce_action_step__INT,
hits.eCommerceAction.option AS commerce_action_option__STRING,
hits.product.productSKU AS product_sku__STRING,
hits.product.v2ProductName AS product_name__STRING,
hits.product.productRevenue AS product_revenue__INT,
hits.product.productPrice AS product_price__INT,
hits.product.productQuantity AS product_quantity__INT,
hits.product.productRefundAmount AS product_refund_amount__INT,
hits.product.v2ProductCategory AS product_category__STRING,
hits.transaction.transactionId AS transaction_id__STRING,
hits.transaction.transactionCoupon AS transaction_coupon__STRING,
hits.transaction.transactionRevenue AS transaction_revenue__INT,
hits.transaction.transactionTax AS transaction_tax__INT,
hits.transaction.transactionShipping AS transaction_shipping__INT,
hits.transaction.affiliation AS transaction_affiliation__STRING,
hits.appInfo.screenName AS app_current_name__STRING,
hits.appInfo.screenDepth AS app_screen_depth__INT,
hits.appInfo.landingScreenName AS app_landing_screen__STRING,
hits.appInfo.exitScreenName AS app_exit_screen__STRING,
hits.exceptionInfo.description AS exception_description__STRING,
hits.exceptionInfo.isFatal AS exception_is_fatal__BOOLEAN
FROM
[98513938.ga_sessions_20151112]
HAVING
customer_id__LONG IS NOT NULL
AND customer_id__LONG != 'NA'
AND customer_id__LONG != ''
I wrote the result of this query into another table, denorm ("Flatten Results" on, "Allow Large Results" on).
I get different results when I query denorm with the clause
WHERE session_id__STRING = "100001897901013346771447300813"
versus wrapping the above query as follows (which yields the desired results):
SELECT * FROM (_above query_) AS foo WHERE session_id__STRING = "100001897901013346771447300813"
I'm sure this is by design, but could someone explain the difference between these two methods?
I believe you are saying that you did check the box "Flatten Results" when you created the output table? And I assume from your question that session_id__STRING is a repeated field?
If those are correct assumptions, then what you are seeing is exactly the behavior you referenced from the documentation above. You asked BigQuery to "flatten results" so it turned your repeated field into an un-repeated field and duplicated all the fields around it so that you have a flat (i.e., no repeated data) table.
If the desired behavior is the one you see when querying over the subquery, then you should uncheck that box when creating your table.
Looking at the documentation, I see that when I selected a field that is inside a record it will automatically flatten that record and duplicate the surrounding columns.
This is not correct. BTW, can you please point to the documentation - it needs to be improved.
Selecting a field does not flatten that record. So if you have a table T with a single record {a = 1, b = (2, 2, 3)}, then do
SELECT * FROM T WHERE b = 2
You still get a single record {a = 1, b = (2, 2)}. SELECT COUNT(a) from this subquery would return 1.
But once you write the results of this query with flatten=on, you get two records: {a = 1, b = 2}, {a = 1, b = 2}. SELECT COUNT(a) from the flattened table would return 2.
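To make the difference concrete, a sketch in legacy BigQuery SQL (T and flat_T are hypothetical; flat_T is the result of materializing the subquery with "Flatten Results" checked):

SELECT COUNT(a) FROM (SELECT a, b FROM T WHERE b = 2)

returns 1, because the repeated field b stays nested and T still holds one logical record, while

SELECT COUNT(a) FROM flat_T

returns 2, because flattening turned each value of b into its own row.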
I would like to execute a fairly complex SQL statement using SQLite.swift and get the result, preferably in an array, to use as the data source for a table view. The statement looks like this:
SELECT defindex, AVG(price) FROM prices WHERE quality = 5 AND price_index != 0 GROUP BY defindex ORDER BY AVG(price) DESC
I was studying the SQLite.swift documentation to find out how to do it properly, but I couldn't find a way. I could call prepare on the database and iterate through the Statement object, but that wouldn't be optimal performance-wise.
Any help would be appreciated.
Most sequences in Swift can be unpacked into an array by simply wrapping the sequence itself in an array:
let stmt = db.prepare(
"SELECT defindex, AVG(price) FROM prices " +
"WHERE quality = 5 AND price_index != 0 " +
"GROUP BY defindex " +
"ORDER BY AVG(price) DESC"
)
let rows = Array(stmt)
Building a data source from this should be relatively straightforward at this point.
If you use the type-safe API, it would look like this:
let query = prices.select(defindex, average(price))
.filter(quality == 5 && price_index != 0)
.group(defindex)
.order(average(price).desc)
let rows = Array(query)
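From there, feeding a table view is mostly a matter of mapping each row to the values it needs. A sketch, under the assumption that defindex and price are the Expression values used to build the query above:

// hypothetical: defindex is Expression<Int>, price is Expression<Double>
let data: [(defindex: Int, avgPrice: Double?)] = Array(query).map { row in
    (defindex: row[defindex], avgPrice: row[average(price)])
}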
How can I get a single-row result (e.g. in the form of a table/array) back from a SQL statement, using Lua SQLite (LuaSQLite3)? For example this one:
SELECT * FROM sqlite_master WHERE name ='myTable';
So far I note:
using "nrows"/"rows" it gives an iterator back
using "exec" it doesn't seem to give a result back(?)
Specific questions are then:
Q1 - How to get a single row (say first row) result back?
Q2 - How to get row count? (e.g. num_rows_returned = db:XXXX(sql))
In order to get a single row, use the db:first_row method, like so.
row = db:first_row("SELECT `id` FROM `table`")
print(row.id)
In order to get the row count, use the SQL COUNT statement, like so.
row = db:first_row("SELECT COUNT(`id`) AS count FROM `table`")
print(row.count)
EDIT: Ah, sorry for that; first_row isn't actually available in LuaSQLite3. Here are some methods that should work.
You can also use db:nrows. Like so.
rows = db:nrows("SELECT `id` FROM `table`")
row = rows[1]
print(row.id)
We can also modify this to get the number of rows.
rows = db:nrows("SELECT COUNT(`id`) AS count FROM `table`")
row = rows[1]
print(row.count)
Here is a demo of getting the returned count:
> require "lsqlite3"
> db = sqlite3.open":memory:"
> db:exec "create table foo (x,y,z);"
> for x in db:urows "select count(*) from foo" do print(x) end
0
> db:exec "insert into foo values (10,11,12);"
> for x in db:urows "select count(*) from foo" do print(x) end
1
>
Just loop over the iterator you get back from rows (or whichever function you use), and break at the end of the first pass so you only iterate once.
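A sketch of that pattern with nrows (the columns printed depend on your table; sqlite_master is used here since it's in the question):

-- nrows yields each row as a table keyed by column name
for row in db:nrows("SELECT * FROM sqlite_master WHERE name = 'myTable'") do
    print(row.name, row.sql)
    break  -- stop after the first row
end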
Getting the count is all about using SQL. You compute it with the SELECT statement:
SELECT count(*) FROM ...
This will return one row containing a single value: the number of rows in the query.
This is similar to what I'm using in my project and works well for me.
local query = "SELECT content FROM playerData WHERE name = 'myTable' LIMIT 1"
local queryResultTable = {}
local queryFunction = function(userData, numberOfColumns, columnValues, columnTitles)
for i = 1, numberOfColumns do
queryResultTable[columnTitles[i]] = columnValues[i]
end
end
db:exec(query, queryFunction)
for k,v in pairs(queryResultTable) do
print(k,v)
end
You can even concatenate values into the query string to use inside a generic method/function.
local query = "SELECT * FROM ZQuestionTable WHERE ConceptNumber = "..conceptNumber.." AND QuestionNumber = "..questionNumber.." LIMIT 1"