I am a beginner in PL/SQL, so don't be too harsh.
I have a table with Column_A (current month amount) and Column_B (previous month amount), both of type NUMBER. I need to write a condition for a calculation: Column_A - Column_B = result. If result > 0 (meaning the current month increased compared to the previous one), add the result to Column_A.
I don't know how to write this one.
You can try a query like the one below.
UPDATE your_table
SET column_A =
  CASE
    WHEN (column_A - column_B) > 0 THEN column_A + (column_A - column_B)
    ELSE column_A
  END;
This checks every record for a difference greater than zero and, where one is found, updates column_A to the sum of column_A and the difference. For example, if column_A = 100 and column_B = 80, the difference is 20, so column_A becomes 120; otherwise column_A is left unchanged.
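Incidentally, since the ELSE branch just writes column_A back unchanged, the same end state can be reached with a WHERE clause that touches fewer rows; a sketch against the same assumed table:
UPDATE your_table
SET column_A = column_A + (column_A - column_B)
WHERE (column_A - column_B) > 0;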
Hope this helps, and happy learning!
Edited:
Well, if you are just manipulating the data for display, then you can simplify your query as below; it performs the same calculation.
SELECT CASE
         WHEN (Current_month_amount - previous_month_amount) > 0
           THEN Current_month_amount + (Current_month_amount - previous_month_amount)
         ELSE Current_month_amount
       END AS Current_month_amount,
       previous_month_amount,
       (Current_month_amount - previous_month_amount) AS Amount_Difference
FROM table_1;
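As a quick sanity check, the same CASE can be run against made-up inline values (FROM dual, since this is Oracle; the numbers are only illustrative):
SELECT CASE
         WHEN (Current_month_amount - previous_month_amount) > 0
           THEN Current_month_amount + (Current_month_amount - previous_month_amount)
         ELSE Current_month_amount
       END AS Current_month_amount
FROM (SELECT 100 AS Current_month_amount, 80 AS previous_month_amount FROM dual);
-- returns 120: the difference is 20, so 100 + 20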
I have two tables: one with a person's record and an initial number, and a second one with records of changes to this number.
During a join, I do coalesce(latest_of_series, initial) to get a single number per person. So far so good.
I also bucket these numbers into groups and order the groups separately. I know I can do:
select
coalesce(latest, initial) as final,
case
when coalesce(latest, initial) > 1 and coalesce(latest, initial) < 100 then 'group 1'
-- other cases
end as "group"
-- rest of the query
but that's of course horribly unreadable.
I tried:
select
coalesce(latest_of_series, initial_if_no_series) as value,
case
when value > 1 and value < 100 then 'group 1'
-- rest of the cases
end as "group"
-- rest of the query
but then SQLite complains that there's no column "value".
Is there really no way of using the previous result of coalesce as a "variable"?
That's not an SQLite limitation. That's an SQL limitation.
All the column names are decided as one: you can't define a column in line 2 of your query and then refer to it in line 3. All columns derive from the tables you select from, each on their own; they can't "see" each other.
But you can use nested queries.
select
value,
case
when value >= 1 and value < 100 then 'group 1'
when value >= 100 and value < 200 then 'group 2'
else 'group 3'
end value_group
from
(
select
coalesce(latest_of_series, initial_if_no_series) as value
from
my_table
group by
user_id
) v
This way, the columns of the inner query are decided as one, and the columns of the outer query are decided as one. It might even be faster, depending on the circumstances.
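Equivalently, a WITH clause (CTE) lets the same two levels read top-down instead of inside-out; SQLite has supported WITH since version 3.8.3. A sketch over the same assumed columns:
with v as
(
select
coalesce(latest_of_series, initial_if_no_series) as value
from
my_table
group by
user_id
)
select
value,
case
when value >= 1 and value < 100 then 'group 1'
when value >= 100 and value < 200 then 'group 2'
else 'group 3'
end value_group
from
v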
I'm trying to do a count(case when) in Amazon Redshift.
Using this reference, I wrote:
select
sfdc_account_key,
record_type_name,
vplus_stage,
vplus_stage_entered_date,
site_delivered_date,
case when vplus_stage = 'Lost' then -1 else 0 end as stage_lost_yn,
case when vplus_stage = 'Lost' then 2000 else 0 end as stage_lost_revenue,
case when vplus_stage = 'Lost' then datediff(month,vplus_stage_entered_date,CURRENT_DATE) else 0 end as stage_lost_months_since,
count(case when vplus_stage = 'Lost' then 1 else 0 end) as stage_lost_count
from shared.vplus_enrollment_dim
where record_type_name = 'APM Website';
But I'm getting this error:
[42803][500310] [Amazon](500310) Invalid operation: column "vplus_enrollment_dim.sfdc_account_key" must appear in the GROUP BY clause or be used in an aggregate function; java.lang.RuntimeException: com.amazon.support.exceptions.ErrorException: [Amazon](500310) Invalid operation: column "vplus_enrollment_dim.sfdc_account_key" must appear in the GROUP BY clause or be used in an aggregate function;
The query ran fine before I added the count. I'm not sure what I'm doing wrong here -- thanks!
You cannot mix an aggregate function (sum, count, etc.) with non-aggregate columns unless those columns appear in a group by.
The syntax is like this
select a, count(*)
from table
group by a (or group by 1 in Redshift)
In your query you need to add
group by 1,2,3,4,5,6,7,8
because you have 8 output columns other than the count.
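Spelled out, a sketch of the corrected query using your original column list (one extra note: COUNT counts every non-NULL value, so count(case ... else 0 end) counts all rows; dropping the ELSE makes it count only the 'Lost' rows):
select
sfdc_account_key,
record_type_name,
vplus_stage,
vplus_stage_entered_date,
site_delivered_date,
case when vplus_stage = 'Lost' then -1 else 0 end as stage_lost_yn,
case when vplus_stage = 'Lost' then 2000 else 0 end as stage_lost_revenue,
case when vplus_stage = 'Lost' then datediff(month,vplus_stage_entered_date,CURRENT_DATE) else 0 end as stage_lost_months_since,
count(case when vplus_stage = 'Lost' then 1 end) as stage_lost_count
from shared.vplus_enrollment_dim
where record_type_name = 'APM Website'
group by 1,2,3,4,5,6,7,8;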
Since I don't know your data and use case, I cannot tell you whether it will give you the right result, but the SQL will be syntactically correct.
The basic rule is:
If you are using an aggregate function (eg COUNT(...)), then you must supply a GROUP BY clause to define the grouping
Exception: If all columns are aggregates (eg SELECT COUNT(*), AVG(sales) FROM table)
Any columns that are not aggregate functions must appear in the GROUP BY (eg SELECT year, month, AVG(sales) FROM table GROUP BY year, month)
Your query has a COUNT() aggregate function mixed in with non-aggregate values, which gives rise to the error.
Looking at your query, you probably don't want to group on all of the columns (eg stage_lost_revenue and stage_lost_months_since don't look like likely grouping columns). You might want to mock up a query result to figure out what you actually want from such a query.
I'm trying to create a single row of data, but the Group By clause is screwing me up.
Here's my table:
RegistrationPK : DateBirth : RegistrationDate
I'm trying to get the age of people at the time of Registration.
What I have is:
SELECT
CASE WHEN DATEDIFF(YEAR,DateBirth,RegistrationDate) < 20 THEN COUNT(registrationpk) END AS Under20
FROM dbo.Registration r
GROUP BY r.DateBirth, r.RegistrationDate
Instead of getting one column "Under20" with one row of data, I get all the different DateBirth rows.
How can I do a DateDiff without a Group By?
Here it is:
SELECT
SUM (
CASE WHEN DATEDIFF(YEAR, DateBirth, RegistrationDate) < 20 THEN 1
ELSE 0 END
) AS Under20
FROM dbo.Registration r
I got this to work, but I hate selects within selects; if anyone knows a simpler way, I'd appreciate it. When done as a select within a select, SQL doesn't require a GROUP BY in either statement: the inner select has no aggregates at all, and the outer select is nothing but an aggregate (COUNT simply skips the NULLs the CASE produces when there is no ELSE).
SELECT COUNT(UNDER20) AS UNDER20
FROM (
SELECT
UNDER20 = CASE WHEN DATEDIFF(YEAR, DateBirth, RegistrationDate) < 20 THEN 1 END
FROM dbo.Registration r
) a
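For what it's worth, the SUM(CASE ...) pattern above extends to several age buckets in a single scan; a sketch (the extra bucket bounds are my own assumptions, not from the question):
SELECT
SUM(CASE WHEN DATEDIFF(YEAR, DateBirth, RegistrationDate) < 20 THEN 1 ELSE 0 END) AS Under20,
SUM(CASE WHEN DATEDIFF(YEAR, DateBirth, RegistrationDate) BETWEEN 20 AND 39 THEN 1 ELSE 0 END) AS Age20To39,
SUM(CASE WHEN DATEDIFF(YEAR, DateBirth, RegistrationDate) >= 40 THEN 1 ELSE 0 END) AS Age40Plus
-- note: DATEDIFF(YEAR, ...) counts calendar-year boundaries crossed, not full
-- years elapsed, so these buckets are approximate ages
FROM dbo.Registration r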
The Transact-SQL COUNT(DISTINCT ...) operation counts all distinct non-null values in a column. I need to count the number of distinct values per column in a set of tables, including null values (so if there is a NULL in the column, the result should be (Select Count(Distinct COLNAME) From TABLE) + 1).
This is going to be repeated over every column in every table in the DB, covering hundreds of tables, some of which have over 1M rows. Because this needs to be done for every single column, adding indexes for every column is not a good option.
This will be done as part of an ASP.net site, so integration with code logic is also ok (i.e.: this doesn't have to be completed as part of one query, though if that can be done with good performance, then even better).
What is the most efficient way to do this?
Update After Testing
I tested the different methods from the answers given on a good representative table. The table has 3.2 million records, dozens of columns (a few with indexes, most without). One column has 3.2 million unique values. Other columns range from all Null (one value) to a max of 40K unique values. For each method I performed four tests (with multiple attempts at each, averaging the results): 20 columns at one time, 5 columns at one time, 1 column with many values (3.2M) and 1 column with a small number of values (167). Here are the results, in order of fastest to slowest
Count/GroupBy (Cheran)
CountDistinct+SubQuery (Ellis)
dense_rank (Eriksson)
Count+Max (Andriy)
Testing Results (in seconds):
Method              20_Columns   5_Columns   1_Column (Large)   1_Column (Small)
1) Count/GroupBy          10.8         4.8         2.8                0.14
2) CountDistinct          12.4         4.8         3                  0.7
3) dense_rank            226          30           6                  4.33
4) Count+Max              98.5        44          16                 12.5
Notes:
Interestingly enough, the two methods that were fastest (by far, with only a small difference between them) were both methods that submitted separate queries for each column (and in the case of result #2, the query included a subquery, so there were really two queries submitted per column). Perhaps the gains from limiting the number of table scans are small compared to the performance hit in memory requirements (just a guess).
Though the dense_rank method is definitely the most elegant, it seems that it doesn't scale well (see the result for 20 columns, which is by far the worst of the four methods), and even on a small scale just cannot compete with the performance of Count.
Thanks for the help and suggestions!
SELECT COUNT(*)
FROM (SELECT ColumnName
FROM TableName
GROUP BY ColumnName) AS s;
GROUP BY selects distinct values including NULL. COUNT(*) will include NULLs, as opposed to COUNT(ColumnName), which ignores NULLs.
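A quick illustration of that difference against made-up inline values (SQL Server 2008+ VALUES syntax):
SELECT COUNT(*) AS all_rows, COUNT(ColumnName) AS non_null_rows
FROM (VALUES ('a'), ('b'), (NULL)) AS t(ColumnName);
-- all_rows = 3, non_null_rows = 2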
I think you should try to keep the number of table scans down and count all columns in one table in one go. Something like this could be worth trying.
;with C as
(
select dense_rank() over(order by Col1) as dnCol1,
dense_rank() over(order by Col2) as dnCol2
from YourTable
)
select max(dnCol1) as CountCol1,
max(dnCol2) as CountCol2
from C
Test the query at SE-Data
A development on OP's own solution:
SELECT
COUNT(DISTINCT acolumn) + MAX(CASE WHEN acolumn IS NULL THEN 1 ELSE 0 END)
FROM atable
Run one query that counts the number of distinct values and adds 1 if there are any NULLs in the column (using a subquery):
Select Count(Distinct COLUMNNAME) +
Case When Exists
(Select * from TABLENAME Where COLUMNNAME is Null)
Then 1 Else 0 End
From TABLENAME
You can try:
count(
distinct coalesce(
your_table.column_1, your_table.column_2
-- cast them if the columns are not the same type
)
) as COUNT_TEST
The coalesce function combines two columns, replacing NULL values in the first with values from the second.
I used this in my case and it returned the correct result.
Not sure this would be the fastest, but it might be worth testing. Use CASE to give NULL a value. Clearly you would need to pick a placeholder value that does not occur in the real data. According to the query plan, this is a dead heat with the count(*)/group by solution proposed by Cheran S.
SELECT
COUNT( distinct
(case when [testNull] is null then 'dbNullValue' else [testNull] end)
)
FROM [test].[dbo].[testNullVal]
With this approach you can also count more than one column:
SELECT
COUNT( distinct
(case when [testNull1] is null then 'dbNullValue' else [testNull1] end)
),
COUNT( distinct
(case when [testNull2] is null then 'dbNullValue' else [testNull2] end)
)
FROM [test].[dbo].[testNullVal]
Say I have an interval like
4 days 10:00:00
in Postgres. How do I convert that to a number of hours (106 in this case)? Is there a function, or should I bite the bullet and do something like
extract(day from my_interval) * 24 + extract(hour from my_interval)
Probably the easiest way is:
SELECT EXTRACT(epoch FROM my_interval)/3600
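For the interval in the question, that gives:
SELECT EXTRACT(epoch FROM interval '4 days 10:00:00') / 3600;
-- 106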
If you want an integer, i.e. the number of days:
SELECT (EXTRACT(epoch FROM NOW() - '2014-08-02 08:10:56'::timestamp) / 86400)::int;
To get the number of days, the easiest way would be:
SELECT EXTRACT(DAY FROM NOW() - '2014-08-02 08:10:56'::timestamp);
This returns nearly the same as:
SELECT (EXTRACT(epoch FROM NOW() - '2014-08-02 08:10:56'::timestamp) / 86400)::int;
(not exactly the same: the ::int cast rounds, so the two can differ by one day once the fractional part passes 12 hours).
select floor((date_part('epoch', order_time - '2016-09-05 00:00:00') / 3600)), count(*)
from od_a_week
group by floor((date_part('epoch', order_time - '2016-09-05 00:00:00') / 3600));
The ::int conversion rounds to the nearest integer.
If you want a different behavior, such as rounding down, use the corresponding math function, e.g. floor.
If you are converting a table field, define the field so it stores seconds:
CREATE TABLE IF NOT EXISTS test (
...
field INTERVAL SECOND(0)
);
Extract the value. Remember to cast to int, otherwise you can get an unpleasant surprise once the intervals get big:
EXTRACT(EPOCH FROM field)::int
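For example, against the interval from the question:
SELECT EXTRACT(EPOCH FROM interval '4 days 10:00:00')::int;
-- 381600 seconds, i.e. 106 hours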
If you want to display the result as a date after adding an interval, try this:
Select (current_date + interval 'x day')::date;
I'm working with PostgreSQL 11, and I created a function to get the hours between two different timestamps:
create function analysis.calcHours(datetime1 timestamp, datetime2 timestamp)
returns integer
language plpgsql as $$
declare
diff interval;
begin
diff := datetime2 - datetime1;
return (abs(extract(day from diff)) * 24 + abs(extract(hour from diff)))::integer;
end; $$;
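A quick sanity check against the question's 4 days 10:00:00 example (the timestamps are made up to produce that interval):
select analysis.calcHours(timestamp '2014-08-02 08:00:00',
timestamp '2014-08-06 18:00:00');
-- 106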
Here is a simple query that calculates the total number of days:
select current_date - date '1955-12-15';