Counting 2 Columns from Separate Tables Between 2 Dates - sqlite

I have a query that counts 2 columns from 2 separate tables using subqueries, which works. Now I have to add to this query the ability to filter these results based on the Date of a Call record. Here is the query I am working with:
SELECT (m.FirstName || " " || m.LastName) AS Members,
(
SELECT count(CallToLineOfficers.MemberID)
FROM CallToLineOfficers
WHERE CallToLineOfficers.MemberID = m.MemberID
)
+ (
SELECT count(CallToMembers.MemberID)
FROM CallToMembers
WHERE CallToMembers.MemberID = m.MemberID
) AS Tally
FROM Members AS m, Call, CallToMembers, CallToLineOfficers
Join Call on CallToMembers.CallID = Call.CallID
and CallToLineOfficers.CallID = Call.CallI
WHERE m.FirstName <> 'None'
-- and Call.Date between '2017-03-21' and '2017-03-22'
GROUP BY m.MemberID
ORDER BY m.LastName ASC;
OK, so the table Call stores the Date and its PK is CallID. Both CallToLineOfficers and CallToMembers are bridge tables that contain only CallID and MemberID. In the current query the Date filter is commented out; with that Date range applied, the query should still return all the names, but a count of 1 should appear under only one person's name.
I have tried joining Call.CallID with both Bridge Tables' CallIDs without any luck, though I think this is the right way to do it. Could someone help point me in the right direction? I am lost. (I tried explaining this the best I could, so if you need more info, let me know.)
UPDATED: Here is a screenshot of what I am getting:
Based on the provided date in the sample, the new results, with the Date, should be:
Bob Clark - 1
Rob Catalano - 1
Matt Butler - 1
Danielle Davidson - 1
Jerry Chuska - 1
Tom Cramer - 1
Everyone else should be 0.

At the moment, the subqueries filter only on the member ID. So for any member ID in the outer query, they return the full count.
To reduce the count, you have to filter in the subqueries:
SELECT (FirstName || " " || LastName) AS Members,
(
SELECT count(*)
FROM CallToLineOfficers
JOIN Call USING (CallID)
WHERE MemberID = m.MemberID
AND Date BETWEEN '2017-03-21' AND '2017-03-22'
)
+ (
SELECT count(*)
FROM CallToMembers
JOIN Call USING (CallID)
WHERE MemberID = m.MemberID
AND Date BETWEEN '2017-03-21' AND '2017-03-22'
) AS Tally
FROM Members AS m
WHERE FirstName <> 'None'
ORDER BY LastName ASC;
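One caveat worth noting (this is an assumption about how Date is stored, not something stated in the question): SQLite has no real date type, and BETWEEN on text dates only behaves as expected when the column holds plain ISO-8601 strings like '2017-03-22'. If the column ever carries a time part, a half-open range is the safer sketch:
SELECT count(*)
FROM CallToLineOfficers
JOIN Call USING (CallID)
WHERE MemberID = m.MemberID
  AND Date >= '2017-03-21'   -- start of the range, inclusive
  AND Date < '2017-03-23'    -- day after the end date, so all of 2017-03-22 is included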

Related

Cannot replace a string with several random strings taken from another table in sqlite

I'm trying to replace a placeholder string inside a selection of 10 random records with a random string (a name) taken from another table, using only SQLite statements.
I used replace() with a subquery to swap out the placeholder. I thought that each subquery would load a random name from the names table, but I've found that's not the case, and each placeholder is replaced with the same string.
select id,
       replace(snippet, "%NAME%",
               (select name from names where gender = "male")
       ) as snippet
from imagedata
where timestamp is not NULL
order by random()
limit 10
I was expecting each row of the SELECT to get a different random replacement every time the subquery is invoked:
hello i'm %NAME% and this is my house
This is the car of %NAME%, let me know what you think
Instead, each row gets the same replacement:
hello i'm david and this is my house
This is the car of david, let me know what you think
and so on...
I'm not sure whether this can be done inside SQLite or if I have to do it in PHP over two different database queries.
Thanks in advance!
It seems that random() in the subquery is only evaluated once.
Try this:
select
i.id,
replace(i.snippet, '%NAME%', n.name) snippet
from (
select
id,
snippet,
abs(random()) % (select count(*) from names where gender = 'male') + 1 num
from imagedata
where timestamp is not NULL
order by random() limit 10
) i inner join (
select
n.name,
(select count(*) from names where name < n.name and gender = 'male') + 1 num
from names n
where gender = 'male'
) n on n.num = i.num
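On SQLite 3.25 or later (window-function support is an assumption here, since the question doesn't mention a version), the correlated count that ranks the names can be replaced by ROW_NUMBER(). This is just a sketch of the same idea, reusing the table and column names from the question:
select i.id,
       replace(i.snippet, '%NAME%', n.name) as snippet
from (
    select id, snippet,
           abs(random()) % (select count(*) from names where gender = 'male') + 1 as num
    from imagedata
    where timestamp is not NULL
    order by random()
    limit 10
) as i
join (
    select name,
           row_number() over (order by name) as num
    from names
    where gender = 'male'
) as n on n.num = i.num;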

Why would an Oracle subquery with AND & OR return the wrong result set

I have two subqueries, as shown below. The first query works fine, but the second query, which is basically the first query modified to use AND & OR, doesn't work in the sense that it doesn't return the ID as expected. Any suggestions on what is happening here?
1. (SELECT * FROM (SELECT EMPID FROM EVENT_F
INNER JOIN WCINFORMATION_D
ON EVENT_F.JOB_INFO_ROW_WID= WCINFORMATION_D.ROW_WID
INNER JOIN WCANDIDATE_D ON WCANDIDATE_D.ROW_WID = EVENT_F.CANDIDATE_ROW_WID
WHERE STEP_NAME = 'Offer'
AND WCINFORMATION_D.JOB_FAMILY_NAME IN ('MDP','ELP','Emerging Leader Program','Other')
AND TITLE NOT IN ('Student Ambassador Program for Eligible Summer Interns','Student Ambassador')
AND PI_CANDIDATE_NUM = OUTERAPP.PI_CANDIDATE_NUM
--limit 1
ORDER BY CREATION_DT ASC
) T1 WHERE ROWNUM=1) AS A_ID,
2.(SELECT * FROM (SELECT EMPID FROM EVENT_F
INNER JOIN WCINFORMATION_D
ON EVENT_F.JOB_INFO_ROW_WID= WCINFORMATION_D.ROW_WID
INNER JOIN WCANDIDATE_D ON WCANDIDATE_D.ROW_WID = EVENT_F.CANDIDATE_ROW_WID
WHERE STEP_NAME = 'Offer'
AND WCINFORMATION_D.JOB_FAMILY_NAME IN ('MDP','ELP','Emerging Leader Program','Other') or WCINFORMATION_D.JOB_FAMILY_NAME NOT IN ('MDP','ELP','Emerging Leader Program','Other')
AND TITLE NOT IN ('Student Ambassador Program for Eligible Summer Interns','Student Ambassador')
AND PI_CANDIDATE_NUM = OUTERAPP.PI_CANDIDATE_NUM
--limit 1
ORDER BY CREATION_DT ASC
) T1 WHERE ROWNUM=1) AS A_ID,
If you want to get the count of people in one set of job families, plus a count of people in another set, you need to use a conditional count, e.g. something along the lines of:
SELECT COUNT(CASE WHEN wid.job_family_name IN ('MDP', 'ELP', 'Emerging Leader Program', 'Other') THEN 1 END) job_family_grp1,
COUNT(CASE WHEN wid.job_family_name IS NULL OR wid.job_family_name NOT IN ('MDP', 'ELP', 'Emerging Leader Program', 'Other') THEN 1 END) job_family_grp2
FROM event_f ef
INNER JOIN wcinformation_d wid
ON ef.job_info_row_wid = wid.row_wid
INNER JOIN wcandidate_d wcd
ON wcd.row_wid = ef.candidate_row_wid
WHERE step_name = 'Offer' -- qualify this column name with its table alias
AND title NOT IN ('Student Ambassador Program for Eligible Summer Interns', 'Student Ambassador'); -- qualify this column name with its table alias
You will most likely need to amend this to work for your particular case (it'll have to go as a join into your main query, given there are two columns being selected) since you didn't provide enough information in your question to give us the wider context.
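As an aside on why the second query misbehaves: in Oracle, AND binds more tightly than OR, so without parentheses the added OR branch effectively splits the WHERE clause in two and drops the other filters from one half. A sketch of the parenthesized version, with the predicates copied from the question:
WHERE STEP_NAME = 'Offer'
  AND ( WCINFORMATION_D.JOB_FAMILY_NAME IN ('MDP','ELP','Emerging Leader Program','Other')
        OR WCINFORMATION_D.JOB_FAMILY_NAME NOT IN ('MDP','ELP','Emerging Leader Program','Other') )
  AND TITLE NOT IN ('Student Ambassador Program for Eligible Summer Interns','Student Ambassador')
  AND PI_CANDIDATE_NUM = OUTERAPP.PI_CANDIDATE_NUM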

Column Relationships

I realize I'm far off the solution with what I have:
Select FirstName || ' ' || LastName AS Manager From Employee
Where (Select COUNT(ReportsTo) from Employee
group by ReportsTo
order by ReportsTo desc);
ReportsTo values are the EmployeeID of the person they report to.
What I want is to query the name of the employee with the most employees reporting to them, and who they in turn report to, without NULLs. I'm not sure how to make the connection between column values such as ReportsTo and EmployeeID, so any explanation would help.
For example, the output I would want is two columns, say | Fred Jones | Mary Anne |: the first being the employee whose EmployeeID appears most often in ReportsTo, and the second being the name of the employee whose EmployeeID matches the first employee's ReportsTo.
Do this step by step:
First step: Count how many employees report to a person.
select reportsto, count(*) from employee group by reportsto;
We can order this result by count(*) and limit it to a single row, so as to get the person with the most reporters. The only problem is: what to do in case of ties, i.e. when two persons have the same highest number of reporters? SQLite doesn't offer much to help here. We'll have to query twice:
select reportsto
from employee
group by reportsto
having count(*) =
(
select count(*)
from employee
group by reportsto
order by count(*) desc
limit 1
);
Next step: Get the name. That means we must access the table again.
select
firstname || ' ' || lastname as manager
from employee
where employeeid in
(
select reportsto
from employee
group by reportsto
having count(*) =
(
select count(*)
from employee
group by reportsto
order by count(*) desc
limit 1
)
);
Last step: Get the persons our found managers themselves report to. These can be many, so we group by manager and concatenate all those they report to.
select
e1.firstname || ' ' || e1.lastname as manager,
group_concat(e2.firstname || ' ' || e2.lastname) as reportsto
from employee e1
join employee e2 on e2.employeeid = e1.reportsto
where e1.employeeid in
(
select reportsto
from employee
group by reportsto
having count(*) =
(
select count(*)
from employee
group by reportsto
order by count(*) desc
limit 1
)
)
group by e1.firstname || ' ' || e1.lastname;
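On SQLite 3.25 or later (window-function support is an assumption, since the question doesn't state a version), the tie handling can be collapsed with RANK() instead of the repeated grouped subquery; a sketch using the same tables and aliases:
select
e1.firstname || ' ' || e1.lastname as manager,
group_concat(e2.firstname || ' ' || e2.lastname) as reportsto
from employee e1
join employee e2 on e2.employeeid = e1.reportsto
where e1.employeeid in
(
    select reportsto
    from (
        select reportsto,
               rank() over (order by cnt desc) as rnk
        from (
            select reportsto, count(*) as cnt
            from employee
            group by reportsto
        )
    )
    where rnk = 1
)
group by e1.firstname || ' ' || e1.lastname;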
SELECT e.ReportsTo AS TopManagersEmployeeId,
       COUNT(e.ReportsTo) AS ReportedBy,
       m.FirstName + ' ' + m.LastName AS TopManagersName,
       mm.FirstName + ' ' + mm.LastName AS TheirManagersName
FROM Employees e
JOIN Employees m
ON e.ReportsTo = m.EmployeeID
JOIN Employees mm
ON m.ReportsTo = mm.EmployeeID
GROUP BY e.ReportsTo, m.FirstName, m.LastName, mm.FirstName, mm.LastName
Once you have this data, you can do TOP 1 etc. You can also play around with JOIN, and make it INNER JOIN in the second set where Manager's Manager (mm) is being retrieved.
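A sketch of that TOP 1 step, keeping this answer's SQL Server flavour (in SQLite the equivalents would be || for concatenation and ORDER BY ... LIMIT 1):
SELECT TOP 1
       e.ReportsTo AS TopManagersEmployeeId,
       COUNT(e.ReportsTo) AS ReportedBy,
       m.FirstName + ' ' + m.LastName AS TopManagersName,
       mm.FirstName + ' ' + mm.LastName AS TheirManagersName
FROM Employees e
JOIN Employees m ON e.ReportsTo = m.EmployeeID
JOIN Employees mm ON m.ReportsTo = mm.EmployeeID
GROUP BY e.ReportsTo, m.FirstName, m.LastName, mm.FirstName, mm.LastName
ORDER BY COUNT(e.ReportsTo) DESC;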

SQL Concatenate multiple rows

I'm using Teradata, I have a table like this
ID String
123 Jim
123 John
123 Jane
321 Jill
321 Janine
321 Johan
I want to query the table so I get
ID String
123 Jim, John, Jane
321 Jill, Janine, Johan
I tried partition but there can be many names.
How do I get this result? Even pointing me in the right direction would be great.
Unfortunately there's no PIVOT in Teradata (only a TD_UNPIVOT in 14.10).
If you're lucky there's an aggregate UDF at your site to do a group concat (probably a low possibility).
Otherwise there are two options: recursion or aggregation.
If the maximum number of rows per id is known, aggregation is normally faster. It's a lot of code, but most of it is just cut & paste.
SELECT
id,
MAX(CASE WHEN rn = 1 THEN string END)
|| MAX(CASE WHEN rn = 2 THEN ',' || string ELSE '' END)
|| MAX(CASE WHEN rn = 3 THEN ',' || string ELSE '' END)
|| MAX(CASE WHEN rn = 4 THEN ',' || string ELSE '' END)
|| ... -- repeat up to the known maximum
FROM
(
SELECT
id, string,
ROW_NUMBER()
OVER (PARTITION BY id
ORDER BY string) AS rn
FROM t
) AS dt
GROUP BY 1;
For large tables it's much more efficient when you materialize the result of the Derived Table in a Volatile Table first using the GROUP BY column as PI.
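A sketch of that materialization step (the volatile table name vt_rn is made up here); the MAX(CASE ...) query above would then select FROM vt_rn instead of the derived table:
CREATE VOLATILE TABLE vt_rn AS
(
SELECT
id, string,
ROW_NUMBER()
OVER (PARTITION BY id
ORDER BY string) AS rn
FROM t
) WITH DATA
PRIMARY INDEX (id)   -- the GROUP BY column as (non-unique) PI
ON COMMIT PRESERVE ROWS;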
For recursion you should use a Volatile Table, too, as OLAP functions are not allowed in the recursive part. Using a view instead will repeatedly calculate the OLAP function and thus result in bad performance.
CREATE VOLATILE TABLE vt AS
(
SELECT
id
,string
,ROW_NUMBER()
OVER (PARTITION BY id
ORDER BY string DESC) AS rn -- reverse order!
,COUNT(*)
OVER (PARTITION BY id) AS cnt
FROM t
) WITH DATA
UNIQUE PRIMARY INDEX(id, rn)
ON COMMIT PRESERVE ROWS;
WITH RECURSIVE cte
(id, list, rn) AS
(
SELECT
id
,CAST(string AS VARCHAR(1000)) -- define maximum size based on maximum number of rows
,rn
FROM vt
WHERE rn = cnt
UNION ALL
SELECT
vt.id
,cte.list || ',' || vt.string
,vt.rn
FROM vt
JOIN cte
ON vt.id = cte.id
AND vt.rn = cte.rn - 1
)
SELECT id, list
FROM cte
WHERE rn = 1;
There's one problem with this approach: it might need a lot of spool, which is easy to see when you omit the WHERE rn = 1.
SELECT ID,
TRIM(TRAILING ',' FROM (XMLAGG(TRIM(String)|| ',' ORDER BY String) (VARCHAR(10000)))) as Strings
FROM db.table
GROUP BY 1
SQL Server 2017+ and SQL Azure: STRING_AGG
Starting with SQL Server 2017 (and in Azure SQL Database), we can finally concatenate across rows without having to resort to any variable or XML witchery.
STRING_AGG (Transact-SQL)
SELECT ID, STRING_AGG(String, ', ') AS Strings
FROM TableName
GROUP BY ID

Retrieve a table of tallied numbers, best way

I have a query that runs as part of a function and produces a one-row table full of counts, averages, and comma-separated lists, like this:
select
(select
count(*)
from vw_disp_details
where round = 2013
and rating = 1) applicants,
(select
count(*)
from vw_disp_details
where round = 2013
and rating = 1
and applied != 'yes') s_applicants,
(select
LISTAGG(discipline, ',')
WITHIN GROUP (ORDER BY discipline)
from (select discipline,
count(*) discipline_number
from vw_disp_details
where round = 2013
and rating = 1
group by discipline)) disciplines,
(select
LISTAGG(discipline_count, ',')
WITHIN GROUP (ORDER BY discipline)
from (select discipline,
count(*) discipline_count
from vw_disp_details
where round = 2013
and rating = 1
group by discipline)) disciplines_count,
(select
round(avg(util.getawardstocols(application_id,'1','AWARD_NAME')), 2)
from vw_disp_details
where round = 2013
and rating = 1) average_award_score,
(select
round(avg(age))
from vw_disp_details
where round = 2013
and rating = 1) average_age
from dual;
Except that instead of 6 main sub-queries there are 23.
This returns something like this (if it were a CSV):
applicants | s_applicants | disciplines | disciplines_count | average_award_score | average_age
107 | 67 | "speed,accuracy,strength" | 3 | 97 | 23
Now I am programmatically swapping out the "rating = 1" part of the where clauses for other expressions. They all run rather quickly except for the "rating = 1" one, which takes about 90 seconds, and that is because the rating column in the vw_disp_details view is itself computed by a sub-query:
(SELECT score
FROM read r,
eval_criteria_lookup ecl
WHERE r.criteria_id = ecl.criteria_id
AND r.application_id = a.lgo_application_id
AND criteria_description = 'Overall Score'
AND type = 'ABC'
) reader_rank
So when the function runs this extra query seems to slow everything down dramatically.
My question is: is there a better (more efficient) way to run a query like this, which is basically just a series of counts and averages, and how can I refactor it so that the rating = 1 version doesn't take 90 seconds to run?
You could choose to MATERIALIZE the vw_disp_details view. That would pre-calculate the value of the rating column. There are various options for how up to date a materialized view is kept; you would probably want to use the ON COMMIT clause so that vw_disp_details is always correct.
Have a look at the official documentation and see if that would work for you.
http://docs.oracle.com/cd/B28359_01/server.111/b28286/statements_6002.htm
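A minimal sketch of that option (the materialized view name mv_disp_details is made up, and ON COMMIT refresh is only possible where the defining query meets Oracle's fast-refresh restrictions, such as materialized view logs being in place):
CREATE MATERIALIZED VIEW mv_disp_details
BUILD IMMEDIATE
REFRESH COMPLETE ON DEMAND  -- or REFRESH FAST ON COMMIT where the restrictions allow it
AS
SELECT * FROM vw_disp_details;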
Do most of your queries in only one. Instead of doing:
select
(select count(*) from my_tab) as count_all,
(select avg(age) from my_tab) as avg_age,
(select avg(mypkg.get_award(application_id)) from my_tab) as avg_app_id
from dual;
Just do:
select count(*), avg(age), avg(mypkg.get_award(application_id)) from my_tab;
And then, maybe you can do some union all for the other results. But this step all by itself should help.
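Applied to the question's own columns, the single-pass version might look like the sketch below (the two LISTAGG columns would still need their own sub-queries, since they aggregate at the discipline level):
select count(*) as applicants,
       count(case when applied != 'yes' then 1 end) as s_applicants,
       round(avg(util.getawardstocols(application_id, '1', 'AWARD_NAME')), 2) as average_award_score,
       round(avg(age)) as average_age
from vw_disp_details
where round = 2013
and rating = 1;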
I was able to solve this issue by doing two things: creating a new view that returned only the results I needed, which gave me marginal gains in speed, and, in that view, moving the where clause of the lagging sub-query into the where clause of the view and tacking on the result of the sub-query as a column in the view. This still returns the same results, because there are always records in the table the sub-query accessed for each row of the view query.
SELECT
a.application_id,
util.getstatus (a.application_id) status,
(SELECT score
FROM applicant_read ar,
eval_criteria_lookup ecl
WHERE ar.criteria_id = ecl.criteria_id
AND ar.application_id = a.application_id
AND criteria_description = 'Overall Score' -- THESE TWO FIELDS
AND type = 'ABC'                           -- ARE CRITERIA_ID = 15
) score,
as.test_total test_total
FROM application a,
applicant_scores as
WHERE a.application_id = as.application_id(+);
Became
SELECT
a.application_id,
util.getstatus (a.application_id) status,
ar.score,
as.test_total test_total
FROM application a,
applicant_scores as,
applicant_read ar
WHERE a.application_id = as.application_id(+)
AND ar.application_id = a.application_id(+)
AND ar.criteria_id = 15;
