Filtering data based on columns in the result in a Hasura query

I have two tables A and B.
A [ a_id, a_num]
B [ b_id, b_num, a_id ]
How can we write a single hasura query to fetch rows from B where b_num < a_num joining the table based on A.a_id = B.a_id?

What you essentially want is to compare two columns in a where clause, which Hasura does not support at the moment.
See this issue, which has been closed: https://github.com/hasura/graphql-engine/issues/1387
They suggest creating a generated column, a view, or a native function that does this for you.
In my opinion, a view that returns only the A and B combinations where b_num is smaller than a_num is best suited for your use case.
Here is an example of how to create such a view, called filtered_a_b_combos:
CREATE OR REPLACE VIEW filtered_a_b_combos AS (
  SELECT A.a_id, B.b_id
  FROM A
  JOIN B ON A.a_id = B.a_id
  WHERE B.b_num < A.a_num
);
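If you need the full B rows rather than just the ID pairs, the view can expose B's columns too, and after tracking it in the Hasura console you can query it like any other table. A minimal sketch (the view name filtered_b_rows is mine, and it assumes B has only the three columns from the question):
CREATE OR REPLACE VIEW filtered_b_rows AS (
  -- only B rows whose b_num is smaller than the a_num of the matching A row
  SELECT B.b_id, B.b_num, B.a_id
  FROM B
  JOIN A ON A.a_id = B.a_id
  WHERE B.b_num < A.a_num
);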

Related

Recursive SQLite CTE with JSON1 json_each

I have a SQLite table where one column contains a JSON array containing 0 or more values. Something like this:
id|values
0 |[1,2,3]
1 |[]
2 |[2,3,4]
3 |[2]
What I want to do is "unfold" this into a list of all distinct values contained within the arrays of that column.
To start, I am using the JSON1 extension's json_each function to extract a table of values from a row:
SELECT value
FROM json_each(
  (
    SELECT "values"
    FROM my_table
    WHERE id == 2
  )
)
Where I can vary the id (2, above) to select any row in the table.
Now, I am trying to wrap this in a recursive CTE so that I can apply it to each row across the entire table and union the results. As a first step I replicated (roughly) the results from above as follows:
WITH RECURSIVE result AS (
  SELECT null
  UNION ALL
  SELECT value
  FROM json_each(
    (
      SELECT "values"
      FROM my_table
      WHERE id == 2
    )
  )
)
SELECT * FROM result;
As the next step I had originally planned to make id a variable and increment it (in a similar manner to the first example in the documentation), but I haven't been able to get that to work.
I have gone through the other examples in the documentation, but they are somewhat more complex and I haven't been able to distill those down to see how they might apply to this problem.
Can someone provide a simple example of how to solve this (or a similar problem) with a recursive CTE?
Of course, my goal is to solve the problem with or without CTEs, so I'm also happy to hear if there is a better way...
You do not need a recursive CTE for this.
To call json_each for multiple source rows, use a join:
SELECT t1.id, t2.value
FROM my_table AS t1
JOIN json_each((SELECT "values" FROM my_table WHERE id = t1.id)) AS t2;
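Since the stated goal was the list of all distinct values across the whole column, the same idea finishes the job. A sketch (passing the column to json_each directly also works here, and rows with empty arrays simply contribute nothing):
SELECT DISTINCT t2.value
FROM my_table AS t1
JOIN json_each(t1."values") AS t2
ORDER BY t2.value;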

How to pull column names from multiple tables using R

Apologies in advance; I'm new to RStudio...
There are two parts to this question:
1) I have a large database that contains almost 6,000 tables. Many of these tables have no data in them. Is there R code to pull a list of only the table names that have data in them?
I know how to pull a list of all table names and how to pull specific table data using the code below:
library(RODBC)
test <- odbcDriverConnect('driver={SQL Server};server=(SERVER);database=(DB_Name);trusted_connection=true')
rest <- sqlQuery(test, 'SELECT * FROM information_schema.tables')
Table1 <- sqlFetch(test, "PROPERTY")
Above is the code I use to access the database and tables.
"test" is the connection
"rest" shows the list of 5,803 tables names.. one of which is called "PROPERTY"
"Table1" is simply pulling one of the tables named "PROPERTY".
I am looking to make "rest" show only the tables that have data in them.
2) My ultimate goal, which leads to the second question, is to create a table that lists every table in this database in column 1, with columns 2, 3, 4, etc. containing every column header from the corresponding table. Any idea how to do that?
Thanks so much!
The Tables object built below is a data frame giving every table in the database and how many rows each contains. As a condition, it requires that any table selected have at least one record. This is probably the fastest way to get your list of non-empty tables. I pulled the query behind it from https://stackoverflow.com/a/14163881/1017276
My only reservation about that query is that it doesn't give the schema name, and it is possible to have tables with the same name in different schemas. So this is likely only going to work well within one schema at a time.
library(RODBCext)

Tables <-
  sqlExecute(
    channel = test,
    query = "SELECT T.name TableName, I.rows Records
             FROM sysobjects T, sysindexes I
             WHERE T.xtype = ? AND I.id = T.id AND I.indid IN (0,1) AND I.rows > 0
             ORDER BY TableName;",
    data = list(xtype = "U"),
    fetch = TRUE,
    stringsAsFactors = FALSE
  )
This next part uses the tables you found above and gets the column information for each of those tables. Lastly, it binds everything into one single data frame of all the column names.
Columns <-
  lapply(Tables$TableName,
         function(x) sqlColumns(test, x))
Columns <- do.call("rbind", Columns)
sqlColumns is a function in RODBC.
sqlExecute is a function in RODBCext that allows for parameterized queries. I tend to use that anytime I need to use quoted strings in a query.
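To get from Columns to the layout asked for in part 2 (table name in column 1, that table's column headers across the remaining columns), here is a base-R sketch, relying on the TABLE_NAME and COLUMN_NAME fields that sqlColumns returns:
# Split the column names by table, then pad each set with NA so every
# row of the final data frame has the same number of columns.
col_list <- split(Columns$COLUMN_NAME, Columns$TABLE_NAME)
max_cols <- max(lengths(col_list))
wide <- data.frame(
  TableName = names(col_list),
  t(vapply(col_list,
           function(x) c(x, rep(NA_character_, max_cols - length(x))),
           character(max_cols))),
  stringsAsFactors = FALSE
)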

Merge existing records in neo4j, remove duplicates, keep optional relationships

This is similar to Merge existing records in neo4j, remove duplicates, keep relationships, except that the nodes I want to merge have 0-2 relationships I want to keep.
Take the graph generated by:
create (:Person {name:"Bob"})-[:RELATED_TO]->(:Person {name:"Jane"})-[:FRIENDS_WITH]->(:Person {name:"Tim"})<-[:FRIENDS_WITH]-(:Person {name:"Jane"}),
(:Person {name:"Sally"})-[:RELATED_TO]->(:Person {name:"Jane"})
I want to merge the duplicate Jane nodes, preserving the RELATED_TO and FRIENDS_WITH relationships, removing the duplicates.
From the other question I can get as far as:
match (p:Person {name:"Jane"})
with p.name as name, collect(p) as ps, count(*) as pcount
where pcount > 1
with head(ps) as first, tail(ps) as rest
unwind rest as to_delete
return to_delete
But I can't seem to get the matches and/or optional matches correct for merging. I tried chaining optional matches and doing the merge in one statement, and Neo4j gave me a Statement.ExecutionFailure with no additional message. I tried breaking out the merges into each match and ended up with "other node is null". Thoughts?
The following query works. On a side note, for this kind of refactoring I would love to see the day when it becomes possible to set a relationship type from a dynamic variable:
MATCH (n:Person { name:"Jane" })
WITH collect(n) AS janes
WITH head(janes) AS superJane, tail(janes) AS badJanes
UNWIND badJanes AS badGirl
OPTIONAL MATCH (badGirl)-[r:FRIENDS_WITH]->(other)
OPTIONAL MATCH (badGirl)<-[r2:RELATED_TO]-(other2)
DELETE r, r2, badGirl
WITH superJane, collect(other) AS friends, collect(other2) AS related
FOREACH (x IN friends | MERGE (superJane)-[:FRIENDS_WITH]->(x))
FOREACH (x IN related | MERGE (x)-[:RELATED_TO]->(superJane))
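As a quick sanity check after the merge (a hypothetical follow-up, not part of the original answer), you can confirm that a single Jane remains and count her surviving relationships:
MATCH (p:Person { name:"Jane" })
OPTIONAL MATCH (p)-[r]-()
RETURN count(DISTINCT p) AS janes, count(r) AS rels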

How can I join and also exclude on two fields in Access?

I need some guidance about making two different but related queries in Access:
Query 1: Table 1 joins on matches in Table 2 using two fields and using OR (i.e. can match on one field or the other).
Query 2: Table 1 joins on non-matches (excludes) in Table 2 using two fields and using OR (i.e. can match on one field or the other).
1: Note the parentheses (you could also do this in the join, but my preference is in the WHERE clause). This is approximate code; the syntax may be slightly off for Access SQL, but it should point you in the right direction.
WHERE ((table1.fieldA = table2.fieldB
        AND table1.fieldA = table2.fieldC)
       OR table1.fieldA = table2.fieldD)
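For completeness, a full version of query 1 might look like the following. This is a sketch using the placeholder table and field names from above; note that Access typically accepts OR in a join condition only in SQL view, not in the query designer:
SELECT table1.*
FROM table1 INNER JOIN table2
  ON ((table1.fieldA = table2.fieldB
       AND table1.fieldA = table2.fieldC)
      OR table1.fieldA = table2.fieldD);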
2:
SELECT table1.*
FROM table1
LEFT JOIN table2
  ON (table1.fieldA = table2.fieldB
      AND table1.fieldA = table2.fieldC)
  OR table1.fieldA = table2.fieldD
WHERE (table2.fieldB IS NULL
       AND table2.fieldC IS NULL)
   OR table2.fieldD IS NULL;

SUM totals by FOR ALL ENTRIES itab keys

I want to execute a SELECT query on a database table that has 6 key fields, let's assume they are keyA, keyB, ..., keyF.
As an input parameter, my ABAP function module receives an internal table with exactly that structure of key fields; each entry in the internal table therefore corresponds to one tuple in the database table.
Thus I simply need to select all tuples from the database table that correspond to the entries in my internal table.
Furthermore, I want to aggregate an amount column in that database table in exactly the same query.
In pseudo SQL the query would look as follows:
SELECT SUM(amount) FROM table WHERE (keyA, keyB, keyC, keyD, keyE, keyF) IN {internal table}.
However, this representation is not possible in ABAP OpenSQL.
Only a single column (such as keyA) may be stated there, not a composite key. Furthermore, I can only use 'selection tables' (those with SIGN, OPTION, LOW, HIGH) after the keyword IN.
Using FOR ALL ENTRIES seems feasible; however, in this case I cannot use SUM, since aggregation is not allowed in the same query.
Any suggestions?
For selecting records for each entry of an internal table, normally the for all entries idiom in ABAP Open SQL is your friend. In your case, you have the additional requirement to aggregate a sum. Unfortunately, the result set of a SELECT statement that works with for all entries is not allowed to use aggregate functions. In my eyes, the best way in this case is to compute the sum from the result set in the ABAP layer. The following example works in my system (note in passing: using the new ABAP language features that came with 7.40, you could considerably shorten the whole code).
report zz_ztmp_test.

start-of-selection.
  perform test.

* Database table ZTMP_TEST:
*   ID    - key field     - type CHAR10
*   VALUE - non-key field - type INT4
* Content: 'A' 10, 'B' 20, 'C' 30, 'D' 40, 'E' 50

types: ty_entries type standard table of ztmp_test.

* ---
form test.
  data: lv_sum    type i,
        lt_result type ty_entries,
        lt_keys   type ty_entries.

  perform fill_keys changing lt_keys.

  if lt_keys is not initial.
    select * into table lt_result
      from ztmp_test
      for all entries in lt_keys
      where id = lt_keys-id.
  endif.

  perform get_sum using lt_result
                  changing lv_sum.

  write: / lv_sum.
endform.

form fill_keys changing ct_keys type ty_entries.
  append: 'A' to ct_keys,
          'C' to ct_keys,
          'E' to ct_keys.
endform.

form get_sum using it_entries type ty_entries
             changing value(ev_sum) type i.
  field-symbols: <ls_test> type ztmp_test.
  clear ev_sum.
  loop at it_entries assigning <ls_test>.
    add <ls_test>-value to ev_sum.
  endloop.
endform.
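As the parenthetical note above hints, on ABAP 7.40+ the same logic shrinks considerably. A sketch under the same assumptions (table ZTMP_TEST, keys already collected in lt_keys):
if lt_keys is not initial.
  " New Open SQL syntax: host variables need the @ escape,
  " and the result table can be declared inline.
  select * from ztmp_test
    for all entries in @lt_keys
    where id = @lt_keys-id
    into table @data(lt_result).

  " Sum the VALUE column with a constructor expression instead of a loop.
  data(lv_sum) = reduce i( init s = 0
                           for ls in lt_result
                           next s = s + ls-value ).
endif.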
I would use FOR ALL ENTRIES to fetch all the related rows, then loop round the resulting table and add up the relevant field into a total. If you have ABAP 7.40 or later, you can use the REDUCE operator to avoid having to loop round the table manually:
DATA(total) = REDUCE i( INIT sum = 0
                        FOR wa IN itab
                        NEXT sum = sum + wa-field ).
One possible approach is to sum simultaneously inside a SELECT loop, using the SELECT ... ENDSELECT statement.
A sample that accumulates all order quantities per plant:
TYPES: BEGIN OF ls_collect,
         werks TYPE t001w-werks,
         menge TYPE ekpo-menge,
       END OF ls_collect.
DATA: lt_collect TYPE TABLE OF ls_collect.

SELECT werks
  FROM t001w
  UP TO 100 ROWS
  INTO TABLE @DATA(lt_werks).

SELECT werks, menge
  FROM ekpo
  FOR ALL ENTRIES IN @lt_werks
  WHERE werks = @lt_werks-werks
  INTO @DATA(order).
  COLLECT order INTO lt_collect.
ENDSELECT.
The sample has no business sense and placed here just for educational purpose.
Another, more robust and modern approach is a CTE (Common Table Expression), available since ABAP 7.51. Among other things, this technique is specifically intended for total/subtotal tasks:
WITH
  +plants AS (
    SELECT werks
      FROM t001w
      UP TO 100 ROWS ),
  +orders_by_plant AS (
    SELECT e~werks, SUM( menge ) AS menge
      FROM ekpo AS e
      INNER JOIN +plants AS m
        ON e~werks = m~werks
      GROUP BY e~werks )
SELECT werks, menge
  FROM +orders_by_plant
  ORDER BY werks
  INTO TABLE @DATA(lt_sums).

cl_demo_output=>display( lt_sums ).
The first table expression, +plants, stands in for your internal table; the second, +orders_by_plant, totals the quantities for the plants selected above; and the last query is the final output query.
