Amazon Athena - how to count distinct values?

I am trying to count the distinct value after group by two other columns.
In Oracle, this query will look like this:
SELECT column_1,
column_2,
column_3,
COUNT(DISTINCT column_1) OVER (PARTITION BY column_2, column_3) as "count_distinct"
FROM table;
In Athena, similarly, I did
SELECT column_1,
column_2,
column_3,
APPROX_DISTINCT(column_1) OVER(PARTITION by column_2, column_3) as "count_distinct"
FROM table;
However, I am not sure whether approx_distinct is the same thing as count(distinct), because, if I understood correctly, it is only an approximation.

You're correct that approx_distinct is an approximate aggregation function. It has a small standard error, so it is often very useful and more efficient than count(DISTINCT x) would be.
Your original syntax, COUNT(DISTINCT ...) used as a window function, is not yet supported in Presto, which Athena is based on. See https://github.com/trinodb/trino/issues/5523
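If you need the exact distinct count per (column_2, column_3) group rather than an approximation, one workaround (a sketch using the placeholder table and column names from the question, not a tested statement against your data) is to compute COUNT(DISTINCT ...) in a grouped subquery and join it back to the detail rows:
-- Sketch: exact distinct count per group, joined back to every row.
-- "table" and the column names are the placeholders used in the question.
SELECT t.column_1,
       t.column_2,
       t.column_3,
       d.count_distinct
FROM table t
JOIN (
    SELECT column_2,
           column_3,
           COUNT(DISTINCT column_1) AS count_distinct
    FROM table
    GROUP BY column_2, column_3
) d
ON t.column_2 = d.column_2
AND t.column_3 = d.column_3;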

Related

SQLite select unique rows

I have a table where rows appear to be "duplicates" but actually are not (they have different dates).
Each record has a column A that is supposed to be unique. However, because the same A value can appear again later with updated information, the column is no longer unique even though it should be.
Therefore I want only the latest row for each value of A. The table currently contains 500k entries, but the "true" number of unique entries is less than half of that.
I have tried
SELECT *
FROM TABLE
WHERE A = A
AND Date = (SELECT MAX(Date) from TABLE)
ORDER BY DATE
However this only returns 2 results. How do I achieve that?
The subquery on the date is the correct idea, but you must include column A in the subquery and relate it back to the main table. I prefer to use explicit joins rather than embedding the subquery in the WHERE clause; this is usually more efficient anyway.
SELECT TABLE.*
FROM TABLE INNER JOIN
(SELECT A, MAX(Date) AS MaxDate FROM TABLE GROUP BY A) AS latest
ON TABLE.A = latest.A AND TABLE.date = latest.MaxDate
ORDER BY A, date
Or even better, I prefer CTE (Common Table Expression) syntax, since it makes the individual queries easier to read:
WITH latest AS (
SELECT A, MAX(Date) AS MaxDate
FROM TABLE
GROUP BY A
)
SELECT TABLE.*
FROM TABLE INNER JOIN latest
ON TABLE.A = latest.A AND TABLE.date = latest.MaxDate
ORDER BY TABLE.A, TABLE.date
Comparison to other answer
The answer by MikeT relies on a non-standard feature of SQLite. That is fine in itself, as long as you are aware that the solution is not compatible with other database engines and SQL dialects.
The next possible gotcha really depends on your actual data and table schema (neither of which you shared in the question). If your data allows multiple rows with the same date for a single value of A, then the conditions in your question are not enough to definitively remove all duplicates. You would need to identify another column by which to resolve any remaining duplicates, but again your question does not do that.
However, in testing, I found that my solution allows such unresolved duplicates to remain in the results, while MikeT's solution eliminates all duplicates, but does so by arbitrarily excluding one of them. There are ways to fix either solution to deterministically select which duplicate to keep (one purely illustrative possibility is sketched below), but without actual data and the table schema any choice would be mere guessing. I'm glad my answer has been useful so far, but you need to understand your data better (than revealed in the question) to decide which solution is actually best.
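For illustration only: one way to break any remaining ties deterministically, assuming your table has SQLite's implicit rowid (i.e. it was not created WITHOUT ROWID) and using the same TABLE/A/Date placeholders as above, is to keep the row with the largest rowid among rows that share the same A and Date:
-- Illustrative sketch only; using rowid as a tie-breaker is an assumption, not something taken from the question.
WITH latest AS (
    SELECT A, MAX(Date) AS MaxDate
    FROM TABLE
    GROUP BY A
)
SELECT t.*
FROM TABLE AS t
INNER JOIN latest
    ON t.A = latest.A AND t.Date = latest.MaxDate
WHERE t.rowid = (SELECT MAX(t2.rowid)
                 FROM TABLE AS t2
                 WHERE t2.A = t.A AND t2.Date = t.Date)
ORDER BY t.A, t.Date;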
Bonus
Against my better judgement, and although you should really research this separately, here's an example of how you would continue joining this with other queries:
WITH latest AS (
SELECT A, MAX(Date) AS MaxDate
FROM TABLE
GROUP BY A
),
firstResults AS (
SELECT TABLE.*
FROM TABLE INNER JOIN latest
ON TABLE.A = latest.A AND TABLE.date = latest.MaxDate
ORDER BY TABLE.A, TABLE.date
)
SELECT otherTable.*
FROM firstResults JOIN otherTable
ON firstResults.A = otherTable.A
WHERE somecondition = 'foobar'
Another approach, if you're using a somewhat recent version of SQLite (3.25 or newer), is to use the row_number() window function to rank the rows that share an a value by date and pick the first one:
WITH cte AS
(SELECT a, date, row_number() OVER (PARTITION BY a ORDER BY date DESC) AS rn
FROM yourtable)
SELECT a, date
FROM cte
WHERE rn = 1;
One important thing to note, since you mentioned that another answer was slow: an index on mytable(a, date DESC) is needed for this query to perform best, and an index on mytable(a, date) will speed up the other answers given.
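For reference, those two indexes could be created like this (the index names are illustrative; mytable, a and date are the names used in the query above):
-- Illustrative index names; adjust to your actual table and column names.
CREATE INDEX IF NOT EXISTS idx_mytable_a_date_desc ON mytable(a, date DESC);
CREATE INDEX IF NOT EXISTS idx_mytable_a_date ON mytable(a, date);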
I believe, if I understand what you have written, that you could use :-
SELECT a,max(date), other FROM mytable GROUP BY a ORDER BY date;
note that the other column represents any other columns (if present)
However, other will be an arbitrary value (taken from one of the grouped rows), which may well be the required value (in the example it is).
As per :-
Each expression in the result-set is then evaluated once for each group of rows. If the expression is an aggregate expression, it is evaluated across all rows in the group. Otherwise, it is evaluated against a single arbitrarily chosen row from within the group. If there is more than one non-aggregate expression in the result-set, then all such expressions are evaluated for the same row.
SQL As Understood By SQLite - SELECT
More correctly, to eliminate the arbitrary value for the other column, you could use :-
SELECT
a /* will always be the same and isn't arbitrary */,
max(date) /* will be the maximum date */ AS date,
(SELECT other FROM mytable WHERE a = m.a AND date = m.date) AS other
FROM mytable AS m /* AS m allows the outer query to be distinguished from the inner query */
GROUP BY a /* this effectively removes duplicates on the a column */
ORDER BY date
;
The example below appears to produce the same result.
Example :-
Using the following to populate the table with some generated testing data :-
CREATE TABLE IF NOT EXISTS mytable (a TEXT, date TEXT, other);
WITH RECURSIVE cte(count,a,date,other) AS
(
SELECT 1,1,date('now','+'||(abs(random()) % 30)||' days'),'other1'
UNION ALL SELECT count+1,abs(random()) % 20,date('now','+'||(abs(random()) % 30)||' days'), 'other'||(count+1) FROM cte LIMIT 100
)
INSERT INTO mytable (a,date,other) SELECT a,date,other FROM cte
;
SELECT * FROM mytable ORDER BY DATE DESC;
In this case the output (shown as a screenshot) had the rows required to be extracted highlighted.
After the above has been run, the following queries are run:
SELECT * FROM mytable WHERE a = a AND date = (SELECT MAX(date) FROM mytable);
SELECT * FROM mytable WHERE /*a = a AND*/ date = (SELECT MAX(date) FROM mytable);
/* Will only select 1 row per unique value of a BUT other will be an arbitrary value, not necessarily the latest */
SELECT a,max(date), other FROM mytable GROUP BY a /* group by effectively display unique */;
SELECT
a /* will always be the same and isn't arbitrary */,
max(date) /* will be the maximum date */ AS date,
(SELECT other FROM mytable WHERE a = m.a AND date = m.date) AS other
FROM mytable AS m
GROUP BY a
;
The first two of these queries show that a = a does nothing, as it is always true; both simply return the row(s) with the overall maximum date.
The third query produces (unordered) one row per unique value of a; the values of other can be checked against the earlier SELECT * output.
In this case the shorter query happens to work even though the values of other are, formally, arbitrary (in practice they depend on how the query planner plans the query).
The fourth, more correct, query produces the same results.

Difference between Qualify and Having

Can someone please explain to me the difference between qualify...over...partition by and group by...having in Teradata? I would also like to know whether there are any differences in their performance.
QUALIFY is a proprietary extension to filter the result of a Windowed Aggregate Function.
A query is logically processed in a specific order:
FROM: create the basic result set
WHERE: remove rows from the previous result set
GROUP BY: apply aggregate functions on the previous result set
HAVING: remove rows from the previous result set
OVER: apply windowed aggregate functions on the previous result set
QUALIFY: remove rows from the previous result set
The HAVING clause is used to filter on the results of aggregate functions (COUNT, MIN, MAX, etc.).
It eliminates groups based on some criterion, like this:
SELECT dept_no, MIN(salary), MAX(salary), AVG(salary)
FROM employee
WHERE dept_no IN (100,300,500,600)
GROUP BY dept_no
HAVING AVG(salary) > 37000;
The QUALIFY clause eliminates rows based on the value of an analytic (window) function, which is computed for each participating row.
It works on the final result set.
SELECT NAME,LOCATION FROM EMPLOYEE
QUALIFY ROW_NUMBER() OVER ( PARTITION BY NAME ORDER BY JOINING_DATE DESC) = 1;
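For dialects that do not have QUALIFY, the same filtering can be expressed by wrapping the window function in a derived table and filtering it with WHERE (a sketch reusing the EMPLOYEE columns from the query above):
-- Equivalent rewrite without QUALIFY: compute the window function in a derived table, then filter it.
SELECT NAME, LOCATION
FROM (
    SELECT NAME,
           LOCATION,
           ROW_NUMBER() OVER (PARTITION BY NAME ORDER BY JOINING_DATE DESC) AS rn
    FROM EMPLOYEE
) t
WHERE rn = 1;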
We can also combine HAVING and QUALIFY in a single query if we use both aggregate and analytic functions, like below:
SELECT StoreID, SUM(sale),
SUM(profit) OVER (PARTITION BY StoreID)
FROM facts
GROUP BY StoreID, sale, profit
HAVING SUM(sale) > 15
QUALIFY SUM(profit) OVER (PARTITION BY StoreID) > 2;
You can see the order of execution in dnoeth's answer.

Hive: COUNT features requires GROUP BY when using HAVING, work around?

I'm curious if there is a workaround for excluding a field in the 'group by' statement in Hive?
select g.country, count(*) as road_count
from geography g
join g_street gs on (g.id=gs.id)
group by g.iso_country_code, g.virtual
having (g.virtual='f' or g.virtual is null)
;
I do not want the 'g.virtual' in the group by statement because my result should be grouped by country only. Hive requires the 'g.virtual' in the group by statement.
Thanks in advance!
I am not sure what you are trying to achieve with the query, since I see fields in the select which don't appear in the group by statement. The only suggestion I can give is: if you want to restrict the geography table, apply the condition in a where clause (filtering geography before joining it with g_street) and then group by only the required fields.
Here is an example:
select g.iso_country_code, count(*) as road_count
from (select * from geography where virtual = 'f' or virtual is null) g
join g_street gs on (g.id = gs.id)
group by g.iso_country_code

sqlite subqueries with group_concat as columns in select statements

I have two tables: watch_list, which contains a list of items with some important attributes, and price_history, which is just a list of prices. What I would like to do is group ten of the lowest prices into a single column with a group_concat operation, producing one row per item in watch_list with its attributes plus those ten lowest prices. First I tried joins, but then I realized that the operations were happening in the wrong order, so there was no way to get the desired result with a join alone. Then I tried the obvious thing and just queried price_history for every row in watch_list and glued everything together in the host environment, which worked but seemed very inefficient. Now I have the following query, which looks like it should work but doesn't give me the results I want. I would like to know what is wrong with the following statement:
select w.asin,w.title,
(select group_concat(lowest_used_price) from price_history as p
where p.asin=w.asin limit 10)
as lowest_used
from watch_list as w
Basically I want the limit operation to happen before group_concat does anything but I can't think of a sql statement that will do that.
Figured it out. As somebody once said, "All problems in computer science can be solved by another level of indirection", and in this case an extra select subquery did the trick:
select w.asin,w.title,
(select group_concat(lowest_used_price)
from (select lowest_used_price from price_history as p
where p.asin=w.asin order by lowest_used_price limit 10)) as lowest_used
from watch_list as w

Sqlite group_concat ordering

In SQLite I can use group_concat to turn:
1...A
1...B
1...C
2...A
2...B
2...C
into:
1...C,B,A
2...C,B,A
but the order of the concatenation is random, according to the docs.
I need to sort the output of group_concat to be
1...A,B,C
2...A,B,C
How can I do this?
Can you not use a subselect with the order by clause in it, and then group_concat the values?
Something like
SELECT ID, GROUP_CONCAT(Val)
FROM (
SELECT ID, Val
FROM YourTable
ORDER BY ID, Val
)
GROUP BY ID;
To be more precise, according to the docs:
The order of the concatenated elements is arbitrary.
It does not really mean random; it just means that the developers reserve the right to use whatever ordering they wish, even different orderings for different queries or different SQLite versions.
With the current version, this ordering might be the one implied by Adrian Stander's answer, as his code does seem to work. So you might just guard yourself with some unit tests and call it a day. But without examining the source code of SQLite really closely you can never be 100% sure this will always work.
If you are willing to build SQLite from source, you can also try to write your own user-defined aggregate function, but there is an easier way.
Fortunately, since version 3.25.0, you have window functions, providing a guaranteed-to-work, although somewhat ugly solution to your problem.
As you can see in the documentation, window functions have their own ORDER BY clauses:
In the example above, the window frame consists of all rows between the previous row ("1 PRECEDING") and the following row ("1 FOLLOWING"), inclusive, where rows are sorted according to the ORDER BY clause in the window-defn (in this case "ORDER BY a").
Note that this alone would not necessarily mean that all aggregate functions respect the ordering inside a window frame, but if you take a look at the unit tests, you can see that this is actually the case:
do_execsql_test 4.10.1 {
SELECT a,
count() OVER (ORDER BY a DESC),
group_concat(a, '.') OVER (ORDER BY a DESC)
FROM t2 ORDER BY a DESC
} {
6 1 6
5 2 6.5
4 3 6.5.4
3 4 6.5.4.3
2 5 6.5.4.3.2
1 6 6.5.4.3.2.1
0 7 6.5.4.3.2.1.0
}
So, to sum it up, you can write
SELECT ID, GROUP_CONCAT(Val) OVER (PARTITION BY ID ORDER BY Val) FROM YourTable;
resulting in:
1|A
1|A,B
1|A,B,C
2|A
2|A,B
2|A,B,C
Unfortunately, this also contains every prefix of your desired aggregations. Instead, you want to specify the window frame to always contain the full range and then discard the redundant values, like this:
SELECT DISTINCT ID, GROUP_CONCAT(Val)
OVER (PARTITION BY ID ORDER BY Val ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)
FROM YourTable;
or like this:
SELECT * FROM (
SELECT ID, GROUP_CONCAT(Val)
OVER (PARTITION BY ID ORDER BY Val ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)
FROM YourTable
)
GROUP BY ID;
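As an aside, my understanding is that SQLite 3.44.0 and later also accept an ORDER BY clause inside aggregate function calls, which would make this much shorter; please verify against the documentation for the version you are actually running:
-- Assumes SQLite >= 3.44.0, where aggregates accept an inline ORDER BY (verify for your version).
SELECT ID, GROUP_CONCAT(Val ORDER BY Val)
FROM YourTable
GROUP BY ID;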
Stumbling upon the underlying sorting problem, I tried this (on 10.4.18-MariaDB):
select GROUP_CONCAT(ex.ID) as ID_list
FROM (
SELECT usr.ID
FROM (
SELECT u1.ID as ID
FROM table_users u1
) usr
GROUP BY ID
) ex
... and found the serialized ID_list ordered!
But I don't have an explanation for this seemingly "correct" result.
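For completeness: on MariaDB/MySQL (unlike SQLite), GROUP_CONCAT accepts DISTINCT and ORDER BY inside the call, so the ordering does not have to rely on the incidental ordering produced by a derived table:
-- MariaDB/MySQL: order (and de-duplicate) the concatenated IDs explicitly.
SELECT GROUP_CONCAT(DISTINCT u1.ID ORDER BY u1.ID) AS ID_list
FROM table_users u1;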
