I'm making a report in Cognos Report Studio and I'm having a bit of trouble getting a count that I need. What I need to do is count the number of IDs for a department, but I need to split the count between initiated and completed. If an ID occurs more than once, it is to be counted as completed; the others, of course, will be initiated. So I'm trying to count the number of occurrences for each distinct ID. Here is the query I've made in SQL Developer:
SELECT
COUNT((CASE WHEN COUNT(S.RFP_ID) > 8 THEN MAX(CT.GCT_STATUS_HISTORY_CLOSE_DT) END)) AS "Sales Admin Completed"
,COUNT((CASE WHEN COUNT(S.RFP_ID) = 8 THEN MIN(CT.GCT_STATUS_HISTORY_OPEN_DT) END)) as "Sales Admin Initiated"
FROM
ADM.B_RFP_WC_COVERAGE_DIM S
JOIN ADM.B_GROUP_CHANGE_REQUEST_DIM CR
ON S.RFP_ID = CR.GCR_RFP_ID
JOIN ADM.GROUP_CHANGE_TASK_FACT CT
ON CR.GROUP_CHANGE_REQUEST_KEY = CT.GROUP_CHANGE_REQUEST_KEY
JOIN ADM.B_DEPARTMENT_DIM D
ON D.DEPARTMENT_KEY = CT.DEPARTMENT_RESP_KEY
WHERE CR.GCR_CHANGE_TYPE_ID = '20'
AND S.RFP_LOB_IND = 'WC'
AND S.RFP_AUDIT_IND = 'N'
AND CR.GCR_RECEIVED_DT BETWEEN '01-JAN-13' AND '31-DEC-13'
AND D.DEPARTMENT_DESC = 'Sales'
AND CT.GCT_STATUS_IND = 'C'
GROUP BY S.RFP_ID ;
Now this works, but I'm not sure how to translate that into Cognos. I tried a CASE that looked like this (this code uses basic names such as dept instead of D.DEPARTMENT_DESC):
CASE WHEN dept = 'Sales' AND count(ID for {DISTINCT ID}) > 1 THEN count(distinct ID)END)
I'm using count(distinct ID) instead of count(maximum(close_date)), but the results would be the same anyway. The "AND" is where I think it's being lost; it obviously isn't the proper way to count occurrences. But I'm hoping I'm close. Is there a way to do this with a CASE? Or at all?
--EDIT--
To make my question more clear, here is an example:
Say I have this data in my table
ID
---
1
2
3
4
2
5
5
6
2
My desired count output would be:
Initiated Completed
--------- ---------
4 2
This is because two of the distinct IDs (2 and 5) occur more than once, so they are counted as Completed. The ones that occur only once are counted as Initiated. I am able to do this in SQL Developer, but I can't figure out how to do it in Cognos Report Studio. I hope this helps to better explain my issue.
Oh, I didn't quite get it originally; amending the answer.
But it's still easiest to do with two queries in Report Studio. The key point is that you can use one query as the source for another query, which guarantees the proper group-bys and calculations.
So if you have the ID list in a table in Report Studio, you create:
Query 1 with data items:
ID,
count(*) or count(1) as count_occurrences
status (initiated or completed) with a formula: if (count_occurrences > 1) then ('completed') else ('initiated').
After that you create Query 2, using Query 1 as its source, with just 2 data items:
[Query1].[Status]
Count with formula: count([Query1].[ID])
That will give you the result you're after.
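For reference, here is the same logic expressed as a plain-SQL sketch (id_source is a stand-in name for wherever the ID list comes from; in Report Studio you would build this as the two nested queries described above rather than typing SQL):
-- inner query = Query 1, outer query = Query 2
SELECT status,
       COUNT(id) AS id_count
FROM (
    SELECT id,
           CASE WHEN COUNT(*) > 1 THEN 'completed' ELSE 'initiated' END AS status
    FROM id_source       -- stand-in for the source of the ID list
    GROUP BY id
) q1
GROUP BY status;
Against the sample data in the edit above, the inner query tags 2 and 5 as 'completed' and the rest as 'initiated', and the outer query counts 4 initiated and 2 completed.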
Here's a link to doco on how to nest queries:
http://pic.dhe.ibm.com/infocenter/cx/v10r1m0/topic/com.ibm.swg.ba.cognos.ug_cr_rptstd.10.1.0.doc/c_cr_rptstd_wrkdat_working_with_queries_rel.html?path=3_3_10_6#cr_rptstd_wrkdat_working_with_queries_rel
I have the below code to do matching between two different tables. The code only updates the first record as "Matched".
I want to compare each record in the ID field from T1 against T2, e.g. check whether A is present in T2, then move to the next record in T1 and check whether B is present in T2, looping until all records in T1 have been matched.
Table 1
ID
A
B
C
Table 2
ID
A
B
Expected Matching Results
ID
A
B
Any help please
If rs2("ID").Value = rs1("ID").Value Then
rs2.MoveNext()
Do While Not rs2.EOF()
rs1("Matching").Value = "Matched"
rs1.Update()
rs2.MoveFirst()
Loop
End If
Assuming I'm understanding your problem correctly, you just want to update the "Matching" column in T1 if there's a corresponding ID record in T2. Doing this in SQL would be ideal, as Peter said.
If you really want to loop, Peter is correct that you have an endless loop: you exit when you hit the end of the record set, but at the end of each iteration you are resetting back to the first record.
I don't know why you are looping through rs2 when you already found a match. Once you know that rs1("ID").Value = rs2("ID").Value then just update the Matching record in rs1 and call rs2.MoveFirst() so you can start back at the top when trying to find a match for the next ID in T1. No loop is required here.
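If you do take the SQL route instead, a minimal sketch (assuming the tables really are named T1 and T2, with an ID column in both and a Matching column in T1, as in the example above):
UPDATE T1
SET Matching = 'Matched'
WHERE ID IN (SELECT ID FROM T2);
That marks every T1 row whose ID appears in T2, with no recordset loop at all.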
I have a SQLite table of user actions on a website. Each row is the same action on the site, just with a different time/date, tagged with a user id. The table has more than 20 million entries. I understand how to get a count by user (i.e. A took the action 3 times, B 4, C 2, D 4, etc.) using group by on the user id. In other words this works fine:
select count(uid) as event_count
from table
group by uid
What I want is the data for a statistical distribution: a count of the users who took only 1 action, a count of the users who took 2 actions, and so on. Said another way, the list might look something like:
1 | 339,440
2 | 452,555
3 | 99,239
5 | 20,209
etc. ...
I could use a having event_count = n clause and just rerun the query for every integer until all were accounted for, but that seems silly. There must be a way to get a single list with two columns: the group size and the count of the users who all took that exact number of actions.
It's as simple as adding another level of grouping on top:
select event_count, count(*) as users_count
from
(select count(uid) as event_count
from table
group by uid) t
group by event_count
order by event_count
I was trying to find which customer has the most records in a table. The RANK function was suggested to me, but it isn't useful for finding the exact record, so I used the following snippet:
select count(customerkey),customerkey
FROM FILEMAPPERTEMPLATE
group by customerkey;
Result:
COUNT(CUSTOMERKEY)  CUSTOMERKEY
1                   298,254
1                   299,732
2                   246,027
43                  197,053
1                   299,745
1                   299,751
60                  271,623
Though I am able to find how many records are attributed to each customerkey in the table, I couldn't find the single record with the maximum count for a customer after executing the query. Please help.
I want only
60 271,623
as the result.
select *
from (select count(customerkey) cnt, customerkey
      from FILEMAPPERTEMPLATE
      group by customerkey
      order by cnt desc)
where rownum < 2;
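If you are on Oracle 12c or later (an assumption; the rownum wrapper above works on any version), the row-limiting clause says the same thing a little more directly:
select count(customerkey) as cnt, customerkey
from FILEMAPPERTEMPLATE
group by customerkey
order by cnt desc
fetch first 1 row only;
Use fetch first 1 row with ties instead if two customers could share the top count and you want to see both.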
I'm trying to create a single row of data, but the Group By clause is screwing me up.
Here's my table:
RegistrationPK : DateBirth : RegistrationDate
I'm trying to get the age of people at the time of Registration.
What I have is:
SELECT
CASE WHEN DATEDIFF(YEAR,DateBirth,RegistrationDate) < 20 THEN COUNT(registrationpk) END AS Under20
FROM dbo.Registration r
GROUP BY r.DateBirth, r.RegistrationDate
Instead of getting one column "Under20" with one row of data, I get all the different DateBirth rows.
How can I do a DateDiff without a Group By?
Here it is:
SELECT
SUM (
CASE WHEN DATEDIFF(YEAR,DateBirth,RegistrationDate) < 20 THEN 1
ELSE 0 END
) AS Under20
FROM dbo.Registration r
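As a side note, the same conditional-aggregation pattern extends to as many buckets as you like in a single row; a sketch with made-up bucket names (Under20 / From20To39 / Over39), still against dbo.Registration:
SELECT
SUM(CASE WHEN DATEDIFF(YEAR,DateBirth,RegistrationDate) < 20 THEN 1 ELSE 0 END) AS Under20,
SUM(CASE WHEN DATEDIFF(YEAR,DateBirth,RegistrationDate) BETWEEN 20 AND 39 THEN 1 ELSE 0 END) AS From20To39,
SUM(CASE WHEN DATEDIFF(YEAR,DateBirth,RegistrationDate) >= 40 THEN 1 ELSE 0 END) AS Over39
FROM dbo.Registration r;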
I got this to work, but I hate selects within selects. If anyone knows a simpler way I'd appreciate it. When done as a select within a select, SQL doesn't require either statement to have a GROUP BY clause, because the outer COUNT aggregates over the entire derived table and the inner SELECT contains no aggregate at all.
SELECT COUNT(UNDER20) AS UNDER20
FROM (
SELECT
UNDER20 = CASE WHEN DATEDIFF(YEAR,DateBirth,RegistrationDate) < 20 THEN '1' END
FROM dbo.Registration r
) a
Input values to the query: 1-20
Values in the database: 4, 5, 15, 16
I would like a query that gives me results like the following:
Value - Count
===== - =====
1 - 3
6 - 9
17 - 3
So basically: first generate continuous numbers from 1 to 20, then count the available numbers in each gap.
I wrote a query but I cannot get it to fully work:
with avail_ip as (
SELECT (0) + LEVEL AS val
FROM DUAL
CONNECT BY LEVEL < 20),
grouped_tab as (
select val,lead(val,1,0) over (order by val) next_val
from avail_ip u
where not exists (
select 'x' from (select 4 val from dual) b
where b.val=u.val) )
select
val,next_val-val difference,
count(*) over (partition by next_val-val) avail_count
from grouped_tab
order by 1
It gives me a count, but I am not sure how to compress the rows down to just the three rows I want.
I was not able to add the complete query; I kept getting an 'error occurred while submission' message. For some reason it does not like the union clause, so I am attaching the query as an image :(
More details of the exact requirement:
I am writing an IP management module and I need to find the available (free) IP addresses within an IP block. The block could be /16 or /24 or even /12. To make it even more challenging, I also support IPv6, so there will be a lot more numbers to manage. All issued IP addresses are stored in decimal format. So my thought is to first generate all the IP decimals within the block range, from the network address to the broadcast address. For example, in a /24 there would be 255 addresses, and in the case of a /16 around 64K.
Second, find all used addresses within the block, and work out the number of available addresses from each starting IP. So in the above example, starting at 1, 3 addresses are available; starting at 6, 9 are available.
My last concern is that the query should run fast enough to work through millions of numbers.
And sorry again if my original question was not clear enough.
Similar sort of idea to what you tried:
with all_values as (
select :start_val + level - 1 as val
from dual
connect by level <= (:end_val - :start_val) + 1
),
missing_values as (
select val
from all_values
where not exists (select null from t42 where id = val)
),
chains as (
select val,
val - (row_number() over (order by val) + :start_val - 1) as chain
from missing_values
)
select min(val), count(*) - 1 as gap_count
from chains
group by chain
order by min(val);
With start_val as 1 and end_val as 20, and your data in table t42, that gets:
MIN(VAL) GAP_COUNT
---------- ----------
1 3
6 9
17 4
I've made end_val inclusive though; not sure if you want it to be inclusive or exclusive. And I've perhaps made it more flexible than you need - your version also assumes you're always starting from 1.
The all_values CTE is basically the same as yours, generating all the numbers between the start and end values - 1 to 20 (inclusive!) in this case.
The missing_values CTE removes the values that are in the table, so you're left with 1,2,3,6,7,8,9,10,11,12,13,14,17,18,19,20.
The chains CTE does the magic part. This gets the difference between each value and where you would expect it to be in a contiguous list. The difference - what I've called 'chain' - is the same for all contiguous missing values; 1,2,3 all get 0, 6 to 14 all get 2, and 17 to 20 all get 4. That chain value can then be used to group by, and you can use the aggregate count and min to get the answer you need.
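To make the chain calculation concrete: with :start_val = 1 the expression reduces to val - row_number(), so for the missing values above it works out as:
val   row_number   chain (val - row_number)
1     1            0
2     2            0
3     3            0
6     4            2
...   ...          ...
14    12           2
17    13           4
...   ...          ...
20    16           4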
SQL Fiddle of a simplified version that is specifically for 1-20, showing the data from each intermediate step. This would work for any upper limit, just by changing the 20, but assumes you'll always start from 1.