Background:
Hey everyone! I'm hoping you can help me with something that I've been trying to figure out. I have a dataset/table called customer_universe that shows all of our in-scope customers. Every row/cust_id in that table is unique.
Let's say this table has 60,000 total rows. Every cust_id entry in this table is unique, so total rows = unique row count.
There is also a dataset that I created (customer_sport_product_purch) that lists all customers (from the customer_universe table) and any of the 3 in-scope sports products they purchased, along with a purchase date. This table only contains customers who have purchased one of the three sport products, but since there are three sport products and a customer may have purchased multiple, the cust_id field does not contain only unique customers.
Let's say this table has 46,000 total rows but only 25,000 unique customers.
Goal Query Output:
I need to write a query that lists out every customer in the customer_universe table and one more column with a binary (1/0) value that will indicate if they have purchased a sport product or not.
So this query output should have a total of 60000 records and only two columns.
Environment and Attempted Solutions Details:
I'm currently building these queries using Impala in Hue. I'm trying to use a CASE statement to get my desired result, but I'm getting the error messages provided below.
Customer_universe Table:
Cust_ID  Customer_Since
1        02-20-2019
2        01-13-2020
3        06-17-2012
4        06-19-2021
5        06-06-2017
Customer_sport_product_purch Table:
Cust_ID  Product     Purch_Dt
1        Basketball  01-01-2022
1        BoxGlove    02-01-2020
5        BoxGlove    12-15-2019
Desired Query Output:
Cust_ID  Sport_Purch
1        1
2        0
3        0
4        0
5        1
Queries I've attempted and the Error Messages I've Received:
Query 1:
SELECT a.cust_id,
       case when (a.cust_id in (select distinct b.cust_id from DB.customer_sport_purch b))
            then 1 else 0 end as Sport_Purch
FROM DB.customer_universe a
GROUP BY a.cust_id;
Error Message 1:
Error while compiling statement: FAILED: SemanticException [Error 10249]: line 2:72 Unsupported SubQuery Expression 'cust_id': Currently SubQuery expressions are only allowed as Where Clause predicates
Query 2:
SELECT a.cust_id,
       case when (a.cust_id in sportPurch) then 1 else 0 end as Sport_Purch
FROM DB.customer_universe a,
     (select distinct cust_id from DB.customer_sport_purch) sportPurch
GROUP BY a.cust_id;
Error Message 2:
Error while compiling statement: FAILED: ParseException line 2:36 cannot recognize input near 'sportPurch' ')' 'then' in expression specification
Other Considerations:
I cannot bring the customer_sport_product_purch.cust_id values into a text file and have the query read from the file, since those values will change frequently and I need to be able to just re-execute the queries.
Thanks in advance!
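One way around that "SubQuery expressions are only allowed as Where Clause predicates" restriction is to move the membership test out of the CASE entirely: LEFT JOIN the universe to the distinct purchaser IDs and flag the rows that find a match. A minimal sketch, assuming the table and column names above:
-- Sketch only: the LEFT JOIN keeps all 60,000 universe rows, and the
-- DISTINCT subquery prevents fan-out, so no GROUP BY is needed.
-- p.cust_id is NULL wherever a customer never bought a sport product.
SELECT u.cust_id,
       CASE WHEN p.cust_id IS NOT NULL THEN 1 ELSE 0 END AS Sport_Purch
FROM DB.customer_universe u
LEFT JOIN (SELECT DISTINCT cust_id
           FROM DB.customer_sport_product_purch) p
  ON u.cust_id = p.cust_id;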
I have this Kusto code that I have been trying to develop, and any help would be greatly appreciated.
The objective is to count rows up to the first occurrence of the CurrentOwningTeam value in the OwningTeamId column.
I packed the Owning Team number and parsed the value into a column of its own. I need to count the owning teams until I get to the current owning team.
Example columns (Kusto / Application Insights):
[CODE]
OwningTeamId  CurrentOwningTeam  CreateDate  RequestType
155523        888888             2017-02-07  PRIMARY
256924        888888             2017-02-08  TRANSFER
888888        888888             2017-02-09  TRANSFER
954005        888888             2017-02-10  TRANSFER
888888        888888             2017-02-11  TRANSFER
155523        888888             2017-02-12  TRANSFER
954005        888888             2017-02-13  TRANSFER
888888        888888             2017-02-14  TRANSFER
[/CODE]
I think you can match the current owning team with the countof() function, but I don't know how to go about it using regex. Note: the values differ for each owning team on every incident, which is why I capture the owning team on the incident first and then try to count up to the very first instance of the CurrentOwningTeam number in the OwningTeamId column. In other words, I want to count the number of rows it takes to get to the very first owning team. In this case, it would be three.
Note: OwningTeamId and CurrentOwningTeam can change on every incident; I first capture the CurrentOwningTeam and then try to match it in the OwningTeamId column.
Note: this is just one incident, but I am trying to do this across multiple incidents.
Below is how I got the Current Owning Team Value.
[CODE]
| extend CurrentOwningTeam=pack_array(OwningTeamId)
| parse CurrentOwningTeam with * "[" CurrentOwningTeam:int "]" *
| serialize CurrentOwningTeam
[/CODE]
I tried using row_number(), but it will not work across multiple incidents, only per incident, so I have to use the count() or countof() functions, or find another way of doing it.
Thanks for the clarification. Here is a suggestion for a query that counts time-ordered rows until a certain condition is reached (the count is kept per incident, using IncidentId as the key).
[CODE]
datatable(IncidentId:string, OwningTeamId:string, CurrentOwningTeam:string, CreateDate:datetime, RequestType:string)
[
'Id1','155523','888888',datetime(2017-02-07),'PRIMARY',
'Id1','256924','888888',datetime(2017-02-08),'TRANSFER',
'Id1','888888','888888',datetime(2017-02-09),'TRANSFER',
'Id1','954005','888888',datetime(2017-02-10),'TRANSFER',
'Id1','888888','888888',datetime(2017-02-11),'TRANSFER',
'Id1','155523','888888',datetime(2017-02-12),'TRANSFER',
'Id1','954005','888888',datetime(2017-02-13),'TRANSFER',
'Id1','888888','888888',datetime(2017-02-14),'TRANSFER',
// Id2
'Id2','155523','888888',datetime(2017-02-07),'PRIMARY',
'Id2','256924','888888',datetime(2017-02-08),'TRANSFER',
'Id2','999999','888888',datetime(2017-02-09),'TRANSFER',
'Id2','954005','888888',datetime(2017-02-10),'TRANSFER',
'Id2','888888','888888',datetime(2017-02-11),'TRANSFER',
'Id2','155523','888888',datetime(2017-02-12),'TRANSFER',
'Id2','954005','888888',datetime(2017-02-13),'TRANSFER',
'Id2','888888','888888',datetime(2017-02-14),'TRANSFER',
]
| order by IncidentId, CreateDate asc
| extend c= row_cumsum(1, IncidentId!=prev(IncidentId))
| where OwningTeamId == CurrentOwningTeam
| summarize arg_min(CreateDate, c) by IncidentId
[/CODE]
Result:
IncidentId CreateDate c
Id1 2017-02-09 00:00:00.0000000 3
Id2 2017-02-11 00:00:00.0000000 5
Here are links to the docs that show how to find the earliest record using the arg_min() aggregation, and to the row_cumsum() (cumulative sum) function:
https://learn.microsoft.com/en-us/azure/kusto/query/arg-min-aggfunction
https://learn.microsoft.com/en-us/azure/kusto/query/rowcumsumfunction
I figured it out by using row_number() directly with grouping inside the table, then finally summing to get my total count.
[CODE]
| serialize Id
| extend RowNumber=row_number(1, Id == Id) // restart condition holds on every row, so each row gets 1
| summarize TotalOwningTeamChanges=sum(RowNumber) by Id
[/CODE]
Then, after that, I got the minimum date to extract the entire data set up to the first instance of the current OwningTeamName.
[CODE]
// Outside the scope of the table.
| extend ExtractFirstOwningTeamCreateDate=CreateDate2
| extend VeryFirstOwningTeamCreateDate=MinimumCreateDate
| where FirstOwningTeamRow == true or MinimumCreateDate <= ExtractFirstOwningTeamCreateDate
| serialize VeryFirstOwningTeamCreateDate
[/CODE]
I have a scenario where I have to correct history data. The current data is like below:
Scenario 1 (phase_cd changes):
Status_cd  event_id  phase_cd  start_dt  end_dt
110        23456     30        1/1/2017  ?
110        23456     31        1/2/2017  ?

Scenario 2 (status_cd changes):
Status_cd  event_id  phase_cd  start_dt  end_dt
110        23456     30        1/1/2017  ?
111        23456     30        1/2/2017  ?
The major columns are status_cd and phase_cd. If either of them changes, the history should be corrected so that the start_dt of the next record becomes the end_dt of the previous record.
Here both records are open, which is not correct.
Please suggest how to handle both scenarios.
Thanks.
How are your history rows ordered in the table? In other words, how do you decide which history rows to compare to see if a value was changed? And how do you uniquely identify a history row entry?
If you order your history rows by start_dt, for example, you can compare the previous and current row values using window functions, like Rob suggested:
UPDATE MyHistoryTable
FROM (
    -- Get source history rows that need to be updated. The window aliases
    -- can't be referenced in the WHERE clause of the same SELECT, so compute
    -- them in an inner query and filter in the outer one.
    SELECT history_row_id, start_dt_next
    FROM (
        SELECT
            history_row_id, -- Change this field to match your table
            status_cd,
            phase_cd,
            MAX(status_cd) OVER (PARTITION BY event_id ORDER BY start_dt -- assuming history is tracked per event_id
                ROWS BETWEEN 1 FOLLOWING AND 1 FOLLOWING) AS status_cd_next, -- "status_cd" of the next history row
            MAX(phase_cd) OVER (PARTITION BY event_id ORDER BY start_dt
                ROWS BETWEEN 1 FOLLOWING AND 1 FOLLOWING) AS phase_cd_next,
            MAX(start_dt) OVER (PARTITION BY event_id ORDER BY start_dt
                ROWS BETWEEN 1 FOLLOWING AND 1 FOLLOWING) AS start_dt_next
        FROM MyHistoryTable
    ) w
    WHERE status_cd <> status_cd_next -- "status_cd" value changed
       OR phase_cd <> phase_cd_next   -- "phase_cd" value changed
) src
SET end_dt = src.start_dt_next -- Close the current row with the next row's start_dt
WHERE MyHistoryTable.history_row_id = src.history_row_id -- Match source rows to target rows
This assumes you have a column to uniquely identify each history row, called "history_row_id". Give it a try and let me know.
I don't have a TD system to test on, so you may need to futz with the table aliases too. You'll also probably need to handle the edge cases (i.e. first/last rows in the table).
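Before running the UPDATE, it can help to preview the rows it would touch. A hedged sketch, assuming the same hypothetical history_row_id column (Teradata's QUALIFY can filter on a window alias directly):
-- Preview: compute each row's would-be end_dt from the next row's start_dt.
-- The last row per event has no "next" row, so end_dt_new is NULL there and
-- that row stays open, which covers one of the edge cases mentioned above.
SELECT history_row_id, status_cd, phase_cd, start_dt,
       MAX(start_dt) OVER (PARTITION BY event_id ORDER BY start_dt
           ROWS BETWEEN 1 FOLLOWING AND 1 FOLLOWING) AS end_dt_new
FROM MyHistoryTable
QUALIFY end_dt_new IS NOT NULL;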
Here's a table; the time when the query runs, i.e. now, is 2010-07-30 22:41:14:
number  person  timestamp
45      mike    2008-02-15 15:31:14
56      mike    2008-02-15 15:30:56
67      mike    2008-02-17 13:31:14
34      mike    2010-07-30 22:31:14
56      bob     2009-07-30 22:37:14
67      bob     2009-07-30 22:37:14
22      tom     2010-07-30 22:37:14
78      fred    2010-07-30 22:37:14
I'd like a query that can add up the number for each person, then only display the name totals which have a recent entry, say in the last 60 minutes. The difficulty seems to be that although it's possible to use AND timestamp > NOW() - INTERVAL 60 MINUTE, this has the effect of stopping the full sum of the number.
The results I would expect from the above are:
mike 202
tom 22
fred 78
bob is not included: his latest entry is not recent enough, it's a year old! mike, although he has several old entries, is valid because he has one recent entry; but, key point, it still adds up his full 'number' and not just those within the time period.
go on, get your head round that one in a single query! and thanks,
andy.
You want a HAVING clause:
select person, sum(number), max(`timestamp`)
from entries -- table name assumed; "table" is a reserved word
group by person
HAVING max(`timestamp`) > now() - INTERVAL 60 MINUTE;
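Run against the sample rows above (with "now" at 2010-07-30 22:41:14), that should return:
person  sum(number)  max(`timestamp`)
mike    202          2010-07-30 22:31:14
tom     22           2010-07-30 22:37:14
fred    78           2010-07-30 22:37:14
bob drops out because his latest entry is from 2009, while mike keeps his full total of 202: only the HAVING filter, not the sum, looks at recency.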
andrew - in the spirit of education, i'm not going to show the full query (actually, i'm being lazy but don't tell anyone) :).
basically tho', you'd have to do a subselect within your main criteria select. in pseudo code it would be:
select person, (select sum(number) from table1 t2 where t2.person = t1.person) as total
from table1 t1 where timestamp > now() - INTERVAL 60 MINUTE
that will blow up, but you get the gist...
jim
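In runnable form, jim's gist might look like the sketch below (the table name entries is assumed, since the post never names it, and a GROUP BY collapses the duplicate rows the bare subselect would return):
-- Correlated subquery: the inner SUM sees all of a person's rows,
-- while the outer WHERE only keeps people with an entry in the last hour.
SELECT t1.person,
       (SELECT SUM(t2.number)
        FROM entries t2
        WHERE t2.person = t1.person) AS total
FROM entries t1
WHERE t1.`timestamp` > NOW() - INTERVAL 60 MINUTE
GROUP BY t1.person;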
I have a problem getting the right "Price" for a product based on Effectivity date.
Example, I have 2 tables:
a. "Transaction" table --> this contains the products ordered, and
b. "Item Master" table --> this contains the product prices and effectivity dates of those prices
Inside the "Transaction" table:
INVOICE_NO INVOICE_DATE PRODUCT_PKG_CODE PRODUCT_PKG_ITEM
1234 6/29/2009 ProductA ProductA-01
1234 6/29/2009 ProductA ProductA-02
1234 6/29/2009 ProductA ProductA-03
Inside the "Item_Master" table:
PRODUCT_PKG_CODE PRODUCT_PKG_ITEM PRODUCT_ITEM_PRICE EFFECTIVITY_DATE
ProductA ProductA-01 25 6/1/2009
ProductA ProductA-02 22 6/1/2009
ProductA ProductA-03 20 6/1/2009
ProductA ProductA-01 15 5/1/2009
ProductA ProductA-02 12 5/1/2009
ProductA ProductA-03 10 5/1/2009
ProductA ProductA-01 19 4/1/2009
ProductA ProductA-02 17 4/1/2009
ProductA ProductA-03 15 4/1/2009
In my report, I need to display the Invoices and Orders, as well as the Price of the Order Item which was effective at the time it was paid (Invoice Date).
My query looks like this (my source db is Oracle):
SELECT T.INVOICE_NO,
       T.INVOICE_DATE,
       T.PRODUCT_PKG_CODE,
       T.PRODUCT_PKG_ITEM,
       P.PRODUCT_ITEM_PRICE
FROM TRANSACTION T,
     ITEM_MASTER P
WHERE T.PRODUCT_PKG_CODE = P.PRODUCT_PKG_CODE
  AND T.PRODUCT_PKG_ITEM = P.PRODUCT_PKG_ITEM
  AND P.EFFECTIVITY_DATE <= T.INVOICE_DATE
  AND T.INVOICE_NO = '1234';
...which shows 2 prices for each item.
I tried several other query styles, but to no avail, so I decided it's time to get help. :)
Thanks to any of you who can share your knowledge. --CJ--
p.s. Sorry, my post doesn't even look right! :D
If it's returning two rows with different effective dates that are less than the invoice date, you may want to change your date join to
AND P.EFFECTIVITY_DATE = (
    SELECT MAX(M.EFFECTIVITY_DATE)
    FROM ITEM_MASTER M
    WHERE M.PRODUCT_PKG_CODE = T.PRODUCT_PKG_CODE
      AND M.PRODUCT_PKG_ITEM = T.PRODUCT_PKG_ITEM
      AND M.EFFECTIVITY_DATE <= T.INVOICE_DATE)
or something like that, to only get the one price that is the most recent one before the invoice date.
Analytics is your friend. You can use the FIRST_VALUE() function, for example, to get all the product_item_prices for the given product, sort by effectivity_date (descending), and just pick the first one. You'll need a DISTINCT as well so that only one row is returned for each transaction.
SELECT DISTINCT
T.INVOICE_NO,
T.INVOICE_DATE,
T.PRODUCT_PKG_CODE,
T.PRODUCT_PKG_ITEM,
FIRST_VALUE(P.PRODUCT_ITEM_PRICE)
OVER (PARTITION BY T.INVOICE_NO, T.INVOICE_DATE,
T.PRODUCT_PKG_CODE, T.PRODUCT_PKG_ITEM
ORDER BY P.EFFECTIVITY_DATE DESC)
as PRODUCT_ITEM_PRICE
FROM TRANSACTION T,
ITEM_MASTER P
WHERE T.PRODUCT_PKG_CODE = P.PRODUCT_PKG_CODE
AND T.PRODUCT_PKG_ITEM = P.PRODUCT_PKG_ITEM
AND P.EFFECTIVITY_DATE <= T.INVOICE_DATE
AND T.INVOICE_NO = '1234';
While your question's formatting is a bit too messy for me to get all the details, it sure does look like you're looking for the standard SQL construct ROW_NUMBER() OVER with both PARTITION BY and ORDER BY. It's in PostgreSQL 8.4, and has been in Oracle (and MS SQL Server, and DB2...) for quite a while; it's the handiest way to select the "top" (or "top N") rows "by group", in a certain order, in a SQL query. Look it up in the PostgreSQL docs.
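A hedged sketch of that ROW_NUMBER() approach against the tables in this thread (Oracle syntax, names taken from the post):
-- Rank each item's candidate prices by effectivity date, newest first,
-- keeping only prices effective on or before the invoice date;
-- rn = 1 is then the price in force when the invoice was paid.
SELECT INVOICE_NO, INVOICE_DATE, PRODUCT_PKG_CODE, PRODUCT_PKG_ITEM, PRODUCT_ITEM_PRICE
FROM (
    SELECT T.INVOICE_NO,
           T.INVOICE_DATE,
           T.PRODUCT_PKG_CODE,
           T.PRODUCT_PKG_ITEM,
           P.PRODUCT_ITEM_PRICE,
           ROW_NUMBER() OVER (
               PARTITION BY T.INVOICE_NO, T.PRODUCT_PKG_CODE, T.PRODUCT_PKG_ITEM
               ORDER BY P.EFFECTIVITY_DATE DESC) AS rn
    FROM TRANSACTION T
    JOIN ITEM_MASTER P
      ON  T.PRODUCT_PKG_CODE = P.PRODUCT_PKG_CODE
      AND T.PRODUCT_PKG_ITEM = P.PRODUCT_PKG_ITEM
      AND P.EFFECTIVITY_DATE <= T.INVOICE_DATE
    WHERE T.INVOICE_NO = '1234'
)
WHERE rn = 1;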