U-SQL query for selecting a range from a table

Data is as
index id
1 112
1 112
2 109
2 109
3 125
3 125
4 199
4 199
5 100
5 100
The ids are not incremental but are sequential in nature (think of them as GUIDs); that's why I have assigned an index for the range query.
The user will give @startid and @endid, and I will get the rows for this range.
First I get the indexes corresponding to these ids:
@indexes = SELECT DISTINCT index
FROM @table
WHERE id IN (@startid, @endid);
As a result I get (for example, if @startid = 2 and @endid = 4):
2
4
Now I know the range will be BETWEEN 2 AND 4, i.e. I want the rows corresponding to indexes 2, 3 and 4:
@result = SELECT index AS index,
id AS id
FROM @data
WHERE
index BETWEEN (THE TWO ENTRIES FROM @indexes)
I would have done this using a nested SELECT, but U-SQL doesn't support it.
Is there a way to treat @indexes as a list and specify a range, or something similar?

BETWEEN is supported in U-SQL; it's just case-sensitive, e.g.
DECLARE CONST @startId int = 2;
DECLARE CONST @endId int = 4;
@input = SELECT *
FROM (
VALUES
( 1, 112 ),
( 1, 112 ),
( 2, 109 ),
( 2, 109 ),
( 3, 125 ),
( 3, 125 ),
( 4, 199 ),
( 4, 199 ),
( 5, 100 ),
( 5, 100 )
) AS x ( index, id );
@output =
SELECT *
FROM @input
WHERE index BETWEEN @startId AND @endId;
OUTPUT @output TO "/output/output.csv"
USING Outputters.Csv(quoting:false);
Alternative approach:
DECLARE CONST @startId int = 109;
DECLARE CONST @endId int = 199;
@input = SELECT *
FROM (
VALUES
( 1, 112 ),
( 1, 112 ),
( 2, 109 ),
( 2, 109 ),
( 3, 125 ),
( 3, 125 ),
( 4, 199 ),
( 4, 199 ),
( 5, 100 ),
( 5, 100 )
) AS x ( index, id );
@output =
SELECT i.*
FROM @input AS i
CROSS JOIN
(
SELECT MIN(index) AS minIndex,
MAX(index) AS maxIndex
FROM @input AS i
WHERE id IN ( @startId, @endId )
) AS x
WHERE i.index BETWEEN x.minIndex AND x.maxIndex;
OUTPUT @output TO "/output/output.csv"
USING Outputters.Csv(quoting:false);
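The min/max CROSS JOIN trick isn't U-SQL specific. As a sanity check, here is a sketch of the same query run through Python's sqlite3 against the sample rows (the table name t and the use of SQLite are illustrative, not part of the original answer):

```python
import sqlite3

# Build the sample rowset in an in-memory SQLite database.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t ("index" INTEGER, id INTEGER);
INSERT INTO t VALUES (1,112),(1,112),(2,109),(2,109),(3,125),
                     (3,125),(4,199),(4,199),(5,100),(5,100);
""")

start_id, end_id = 109, 199
rows = con.execute("""
    SELECT t."index", t.id
    FROM t
    CROSS JOIN (SELECT MIN("index") AS lo, MAX("index") AS hi
                FROM t WHERE id IN (?, ?)) AS x
    WHERE t."index" BETWEEN x.lo AND x.hi
    ORDER BY t."index"
""", (start_id, end_id)).fetchall()
print(rows)  # all rows with index between 2 and 4
```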


Rolling 4 week average conversion rate, in R or Power BI

I am trying to create a rolling 4-week average conversion rate. The column LTA is the conversion rate and equals (Appts/Leads). Right now, LTA is week by week. I need to create a new column that is a 4-week rolling conversion rate.
Here is the data.
Week Leads Appts LTA
4/17/2022 205 83 40.49%
4/24/2022 126 68 53.97%
5/1/2022 117 40 34.19%
5/8/2022 82 38 46.34%
5/15/2022 60 32 53.33%
5/22/2022 45 19 42.22%
5/29/2022 25 19 76.00%
So if we started at the bottom, the RollingAvg for May 29 would be (19+19+32+38)/(25+45+60+82) = 50.943 %
For the week may 22, the numbers would roll back one week, so it'd be (19+32+38+0)/(45+60+82+117) = 29.276 %
Help would be appreciated.
transform(data.frame(lapply(df, zoo::rollsum, k = 4)), roll = Appts/Leads * 100)
Leads Appts roll
1 530 38 7.169811
2 385 70 18.181818
3 304 89 29.276316
4 212 108 50.943396
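The trailing-window arithmetic can also be sketched in plain Python. This uses the standard trailing 4-week window over the question's data (note the OP's May 22 example zero-pads short windows instead, so its second-to-last value differs):

```python
# Rolling 4-week conversion rate: sum of Appts over the trailing 4 weeks
# divided by sum of Leads over the same window, as a percentage.
leads = [205, 126, 117, 82, 60, 45, 25]   # weeks 4/17 -> 5/29, oldest first
appts = [83, 68, 40, 38, 32, 19, 19]

def rolling_rate(leads, appts, k=4):
    rates = []
    for i in range(k - 1, len(leads)):
        window_appts = sum(appts[i - k + 1 : i + 1])
        window_leads = sum(leads[i - k + 1 : i + 1])
        rates.append(round(100 * window_appts / window_leads, 3))
    return rates

# The 5/29 window is (19+19+32+38)/(25+45+60+82) = 108/212, i.e. 50.943%.
print(rolling_rate(leads, appts))
```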
Simple solution for a calculated column in DAX:
RollingAvg =
VAR _currentDate = [Week]
VAR _minDate = _currentDate - 4 * 7
RETURN
    CALCULATE (
        DIVIDE (
            SUM ( 'Table'[Appts] ),
            SUM ( 'Table'[Leads] )
        ),
        // Lift filters on the table to have all rows visible
        ALL ( 'Table' ),
        // Add constraints to dates for a 4-week average
        'Table'[Week] <= _currentDate,
        'Table'[Week] > _minDate
    )
Or better yet, a measure that doesn't take up space in the data model:
RollingAvgMeasure =
/*
    Calculates the 4-week rolling average if used with the Week dimension.
    Otherwise calculates the total rolling average.
*/
VAR _currentDate = MAX ( 'Table'[Week] )
VAR _minDate = _currentDate - 4 * 7
VAR _movingAvg =
    CALCULATE (
        DIVIDE (
            SUM ( 'Table'[Appts] ),
            SUM ( 'Table'[Leads] )
        ),
        ALL ( 'Table' ),
        'Table'[Week] <= _currentDate,
        'Table'[Week] > _minDate
    )
VAR _total = DIVIDE ( SUM ( 'Table'[Appts] ), SUM ( 'Table'[Leads] ) )
RETURN
    // Replace the IF with just _movingAvg to always display the latest 4-week value.
    IF (
        ISFILTERED ( 'Table'[Week] ),
        _movingAvg,
        _total
    )

How to return all rows in a column when using cross apply and DelimitedSplit8K_LEAD together in SQL Server?

This code works to return the info that I need from the column, but it is only returning one row and I need it to return them all.
You can leverage the DelimitedSplit8K_LEAD function, which you can read more about here: https://www.sqlservercentral.com/articles/reaping-the-benefits-of-the-window-functions-in-t-sql-2 You will also find the code for the function there.
select FirstValue = max(case when s.ItemNumber = 12 then s.Item end)
, SecondValue = max(case when s.ItemNumber = 45 then try_convert(datetime, stuff(stuff(stuff(s.Item, 9, 0, ' '), 12, 0, ':'), 15, 0, '.')) end)
from myDatabase x
cross apply DelimitedSplit8K_LEAD(x.myColumn, ',') s
where s.ItemNumber = 12
or s.ItemNumber = 45
Here is an example of the data in the column that I'm trying to return.
,505611,XXXXXXX,,,,,,,,,13M2,,,,,,,,,,,03294961,,,,,,,,,,,,,,,,,,,,,XXXXX,20220216183348,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,US,,,0000,,,,,,,,,,,,,,,,,,,,
Here is an example of it working, just not with using it on the table column https://dbfiddle.uk/?rdbms=sqlserver_2019&fiddle=ca74e807853eb1dea445a8ffb7209b70
Okay so here is an example. Create a table in sql server database...
CREATE TABLE honda
(
user1 nvarchar(max)
);
INSERT INTO honda
(user1)
VALUES
(',523869,HXMFG-01,,,,,,,,,11M2,,,,,,,,,,,03311141,,,,,,,,,,,,,,,,,,,,,EAGLE,20220323082041,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,US,,,0000,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,'),
(',523869,HXMFG-01,,,,,,,,,12M2,,,,,,,,,,,03311148,,,,,,,,,,,,,,,,,,,,,EAGLE,20220323093049,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,US,,,0000,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,'),
(',523869,HXMFG-01,,,,,,,,,13M2,,,,,,,,,,,03311216,,,,,,,,,,,,,,,,,,,,,EAGLE,20220323100350,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,US,,,0000,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,'),
(',523869,HXMFG-01,,,,,,,,,14M2,,,,,,,,,,,03311242,,,,,,,,,,,,,,,,,,,,,EAGLE,20220323103854,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,US,,,0000,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,'),
(',523869,HXMFG-01,,,,,,,,,15M2,,,,,,,,,,,03311267,,,,,,,,,,,,,,,,,,,,,EAGLE,20220323112420,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,US,,,0000,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,'),
(',527040,HXMFG-01,,,,,,,,,16M2,,,,,,,,,,,03311352,,,,,,,,,,,,,,,,,,,,,EAGLE,20220323122930,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,US,,,0000,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,'),
(',527040,HXMFG-01,,,,,,,,,17M2,,,,,,,,,,,03311395,,,,,,,,,,,,,,,,,,,,,EAGLE,20220323130347,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,US,,,0000,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,');
If you're using SQL Server, you should only have to run this middle block of code once.
CREATE FUNCTION [dbo].[DelimitedSplit8K_LEAD]
--===== Define I/O parameters
(@pString VARCHAR(8000), @pDelimiter CHAR(1))
RETURNS TABLE WITH SCHEMABINDING AS
RETURN
--===== "Inline" CTE Driven "Tally Table" produces values from 0 up to 10,000...
-- enough to cover VARCHAR(8000)
WITH E1(N) AS (
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1
), --10E+1 or 10 rows
E2(N) AS (SELECT 1 FROM E1 a, E1 b), --10E+2 or 100 rows
E4(N) AS (SELECT 1 FROM E2 a, E2 b), --10E+4 or 10,000 rows max
cteTally(N) AS (--==== This provides the "zero base" and limits the number of rows right up front
-- for both a performance gain and prevention of accidental "overruns"
SELECT 0 UNION ALL
SELECT TOP (DATALENGTH(ISNULL(@pString,1))) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM E4
),
cteStart(N1) AS (--==== This returns N+1 (starting position of each "element" just once for each delimiter)
SELECT t.N+1
FROM cteTally t
WHERE (SUBSTRING(@pString,t.N,1) = @pDelimiter OR t.N = 0)
)
--===== Do the actual split. The ISNULL/NULLIF combo handles the length for the final element when no delimiter is found.
SELECT ItemNumber = ROW_NUMBER() OVER(ORDER BY s.N1),
Item = SUBSTRING(@pString,s.N1,ISNULL(NULLIF((LEAD(s.N1,1,1) OVER (ORDER BY s.N1) - 1),0)-s.N1,8000))
FROM cteStart s
;
GO
GO
If using dbfiddle, remove the GO from the end of the above block.
select ECI_Level = max(case when s.ItemNumber = 12 then s.Item end)
, 'DateTime' = max(case when s.ItemNumber = 45 then try_convert(datetime, stuff(stuff(stuff(s.Item, 9, 0, ' '), 12, 0, ':'), 15, 0, '.')) end)
from honda x
cross apply DelimitedSplit8K_LEAD(x.user1, ',') s
where s.ItemNumber = 12
or s.ItemNumber = 45
I was able to get it to work by adding GROUP BY like this...
select ECI_Level = max(case when s.ItemNumber = 12 then s.Item end)
, 'DateTime' = max(case when s.ItemNumber = 45 then try_convert(datetime, stuff(stuff(stuff(s.Item, 9, 0, ' '), 12, 0, ':'), 15, 0, '.')) end)
from honda x
cross apply DelimitedSplit8K_LEAD(x.user1, ',') s
where s.ItemNumber = 12
or s.ItemNumber = 45
GROUP BY user1;
The problem is that you are aggregating over the whole honda table, rather than just over the split data.
You need to place the aggregation into an APPLY
select s.*
from honda x
CROSS apply (
SELECT ECI_Level = max(case when s.ItemNumber = 12 then s.Item end)
, DateTime = max(case when s.ItemNumber = 45 then try_convert(datetime, stuff(stuff(stuff(s.Item, 9, 0, ' '), 12, 0, ':'), 15, 0, '.')) end)
FROM DelimitedSplit8K_LEAD(x.user1, ',') s
where s.ItemNumber = 12
or s.ItemNumber = 45
) s;
db<>fiddle
Personally, for just two values I wouldn't bother with a split function. Instead you can use CHARINDEX, feeding each one into the next to get the next , location.
And I guess this just shows why you shouldn't store data in a database like this in the first place.
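The ItemNumber arithmetic can be sanity-checked outside SQL. This Python sketch splits a row the same way the function does; the row string below is a hypothetical one shaped like the honda sample data, with item 12 (the level) and item 45 (the timestamp) in the same positions:

```python
from datetime import datetime

def parse_row(user1):
    # str.split(',') is zero-based while DelimitedSplit8K_LEAD numbers
    # items from 1, so ItemNumber 12 -> index 11 and ItemNumber 45 -> index 44.
    items = user1.split(',')
    eci_level = items[11]
    ts = datetime.strptime(items[44], '%Y%m%d%H%M%S')  # yyyymmddhhmmss
    return eci_level, ts

# Hypothetical row shaped like the honda sample data.
row = (',' + '523869,HXMFG-01' + ',' * 9 + '11M2' + ',' * 11
       + '03311141' + ',' * 21 + 'EAGLE,20220323082041' + ',' * 30)
print(parse_row(row))
```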

How to return zero after the full stop when there is no value, using regexp_substr in Oracle

Values are like:
Num(column)
786.56
35
select num,regexp_substr(num,'[^.]*') "first",regexp_substr(num,'[^.]+$') "second" from cost
When I execute the above query, the output will be like:
num first second
786.56 786 56
35 35 35
I want to print zero if there is no value after the full stop; by default the second column repeats the first value.
There are two options here; using either the occurrence or subexpression parameters available in REGEXP_SUBSTR().
Subexpression - the 5th parameter
Using subexpressions you can pick out which group () in your match you want to return in any given function call
SQL> with the_data (n) as (
2 select 786.56 from dual union all
3 select 35 from dual
4 )
5 select regexp_substr(n, '^(\d+)\.?(\d+)?$', 1, 1, null, 1) as f
6 , regexp_substr(n, '^(\d+)\.?(\d+)?$', 1, 1, null, 2) as s
7 from the_data;
F S
--- ---
786 56
35
^(\d+)\.?(\d+)?$ means at the start of the string ^, pick a group () of digits \d+ followed by an optional \.?. Then, pick an optional group of digits at the end of the string $.
We then use sub-expressions to pick out which group of digits you want to return.
Occurrence - the 3rd parameter
If you place the number in a group and forget about matching the start and end of the string you can pick the first group of numbers and the second group of numbers:
SQL> with the_data (n) as (
2 select 786.56 from dual union all
3 select 35 from dual
4 )
5 select regexp_substr(n, '(\d+)\.?', 1, 1, null, 1) as f
6 , regexp_substr(n, '(\d+)\.?', 1, 2, null, 1) as s
7 from the_data;
F S
--- ---
786 56
35
(\d+)\.? means pick a group () of digits \d+ followed by an optional literal dot. The first occurrence is the data before the dot; the second occurrence is the data after it. You'll note that you still have to use the 5th parameter of REGEXP_SUBSTR() - subexpression - to state that you want only the data in the group.
Both options
You'll note that neither of these return 0 when there are no decimal places; you'll have to add that in with a COALESCE() when the return value is NULL. You also need an explicit cast to an integer as COALESCE() expects consistent data types (this is best practice anyway):
SQL> with the_data (n) as (
2 select 786.56 from dual union all
3 select 35 from dual
4 )
5 select cast(regexp_substr(n, '^(\d+)\.?(\d+)?$', 1, 1, null, 1) as integer) as f
6 , coalesce(cast(regexp_substr(n, '^(\d+)\.?(\d+)?$', 1, 1, null, 2) as integer), 0) as s
7 from the_data;
F S
---- ----
786 56
35 0
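The same subexpression idea carries over to any regex engine. A sketch with Python's re module, where group 1 is the integer part, the optional group 2 is the fraction, and the COALESCE default becomes `or 0` (the function name split_num is just for illustration):

```python
import re

# ^(\d+)\.?(\d+)?$ : digits, an optional dot, then optional trailing digits.
pattern = re.compile(r'^(\d+)\.?(\d+)?$')

def split_num(s):
    m = pattern.match(s)
    first = int(m.group(1))
    second = int(m.group(2) or 0)  # default to 0 when no fractional part
    return first, second

print(split_num('786.56'))  # (786, 56)
print(split_num('35'))      # (35, 0)
```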

SQLite 3 CROSS or INTERSECT complex Subqueries

I have 3 related tables. One table has the rows that I am actually looking for, another table has the Data that I need to be Searching and the Third table describes What data I am looking for. I am getting undesired results from the following query :
SELECT * FROM names WHERE namesKey IN ( SELECT namesKey FROM data WHERE
( dataType IS 3 AND data IS 'COINCIDENCE' )
AND ( dataType IS 2 AND data IS 'STATE' )
AND ( dataType IS 1 AND data IS 'COUNTRY' ) );
I need help making a query based on Multiple rows from the filter table. I need the rows which correspond to the keys from the second table that exist on multiple rows... I am explaining badly... here is an example :
DROP TABLE IF EXISTS names ;
CREATE TABLE names (
namesKey INTEGER PRIMARY KEY ASC,
name TEXT NOT NULL
);
DROP TABLE IF EXISTS data ;
CREATE TABLE data (
dataKey INTEGER PRIMARY KEY ASC,
namesKey INTEGER NOT NULL,
dataType INTEGER NOT NULL,
data TEXT NOT NULL,
FOREIGN KEY(namesKey) REFERENCES names(namesKey)
);
DROP TABLE IF EXISTS filter ;
CREATE TABLE filter (
filterKey INTEGER PRIMARY KEY ASC,
dataType INTEGER NOT NULL,
data TEXT NOT NULL
);
INSERT INTO names( name ) VALUES ( 'name1' );
INSERT INTO names( name ) VALUES ( 'name2' );
INSERT INTO names( name ) VALUES ( 'name3' );
INSERT INTO names( name ) VALUES ( 'name4' );
INSERT INTO names( name ) VALUES ( 'name5' );
INSERT INTO names( name ) VALUES ( 'name6' );
INSERT INTO names( name ) VALUES ( 'name7' );
INSERT INTO names( name ) VALUES ( 'name8' );
INSERT INTO names( name ) VALUES ( 'name9' );
INSERT INTO data( namesKey, dataType, data ) VALUES ( 1, 1, 'COUNTRY' );
INSERT INTO data( namesKey, dataType, data ) VALUES ( 1, 2, 'STATE' );
INSERT INTO data( namesKey, dataType, data ) VALUES ( 1, 3, 'CITY' );
INSERT INTO data( namesKey, dataType, data ) VALUES ( 2, 1, 'COUNTRY' );
INSERT INTO data( namesKey, dataType, data ) VALUES ( 2, 2, 'STATE' );
INSERT INTO data( namesKey, dataType, data ) VALUES ( 2, 3, 'OTHERCITY' );
INSERT INTO data( namesKey, dataType, data ) VALUES ( 3, 1, 'COUNTRY' );
INSERT INTO data( namesKey, dataType, data ) VALUES ( 3, 2, 'STATE' );
INSERT INTO data( namesKey, dataType, data ) VALUES ( 3, 3, 'COINCIDENCE' );
INSERT INTO data( namesKey, dataType, data ) VALUES ( 4, 1, 'COUNTRY' );
INSERT INTO data( namesKey, dataType, data ) VALUES ( 4, 2, 'OTHERSTATE' );
INSERT INTO data( namesKey, dataType, data ) VALUES ( 4, 3, 'COINCIDENCE' );
INSERT INTO data( namesKey, dataType, data ) VALUES ( 5, 1, 'OTHERCOUNTRY' );
INSERT INTO data( namesKey, dataType, data ) VALUES ( 5, 2, 'RANDOM' );
INSERT INTO data( namesKey, dataType, data ) VALUES ( 5, 3, 'COINCIDENCE' );
INSERT INTO data( namesKey, dataType, data ) VALUES ( 6, 1, 'OTHERCOUNTRY' );
INSERT INTO data( namesKey, dataType, data ) VALUES ( 6, 2, 'OTHERSTATE' );
INSERT INTO data( namesKey, dataType, data ) VALUES ( 6, 3, 'COINCIDENCE' );
INSERT INTO filter( dataType, data ) VALUES ( 1, 'COUNTRY' );
INSERT INTO filter( dataType, data ) VALUES ( 2, 'STATE' );
INSERT INTO filter( dataType, data ) VALUES ( 3, 'COINCIDENCE' );
Now what I need is to be able to run 3 different types of queries relatively reliably.
I need to search for "No Data" and get names 7, 8, and 9
This one is Easy :
SELECT * FROM names WHERE namesKey NOT IN ( SELECT namesKey FROM data ) ;
I need to Search based on a Single type of data from the data table
Also Easy, Desired Result 3, 4, 5, and 6
SELECT * FROM names WHERE
namesKey IN ( SELECT namesKey FROM data WHERE
( dataType IS 3 AND data IS 'COINCIDENCE' ) )
;
I need to Search based on Multiple rows From The filter table. This one I don't know how to do...
Desired Result is the name3 row ONLY
I Could do it by
SELECT * FROM names WHERE
namesKey IN ( SELECT namesKey FROM data WHERE
( dataType IS 3 AND data IS 'COINCIDENCE' ) )
AND
namesKey IN ( SELECT namesKey FROM data WHERE
( dataType IS 2 AND data IS 'STATE' ) )
AND
namesKey IN ( SELECT namesKey FROM data WHERE
( dataType IS 1 AND data IS 'COUNTRY' ) )
;
But that is just Ugly with a capital UGH!
And even worse with that approach, the dataType is theoretically arbitrarily large and thus I might end up trying to string together dozens or even Hundreds of sub queries... I could run out of RAM just composing my string before even Trying to put it into the SQL.
So I am looking for a more elegant solution. Any suggestions?
If I understand you correctly, you could use:
SELECT *
FROM names
WHERE namesKey IN (SELECT namesKey
FROM data
WHERE dataType IS 3 AND data IS 'COINCIDENCE'
INTERSECT
SELECT namesKey
FROM data
WHERE dataType IS 2 AND data IS 'STATE'
INTERSECT
SELECT namesKey
FROM data
WHERE dataType IS 1 AND data IS 'COUNTRY'
);
SqlFiddleDemo
Output:
╔═══════════╦═══════╗
║ namesKey ║ name ║
╠═══════════╬═══════╣
║ 3 ║ name3 ║
╚═══════════╩═══════╝
Or using aggregation:
SELECT *
FROM names
WHERE namesKey IN (SELECT namesKey
FROM data
GROUP BY namesKey
HAVING SUM(dataType IS 3 AND data IS 'COINCIDENCE') > 0
AND SUM(dataType IS 2 AND data IS 'STATE') > 0
AND SUM(dataType IS 1 AND data IS 'COUNTRY') > 0
)
SqlFiddleDemo2
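Since the question is SQLite, the INTERSECT variant runs unchanged; here is a quick check through Python's sqlite3 with the sample data trimmed to the first four names:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
CREATE TABLE names (namesKey INTEGER PRIMARY KEY ASC, name TEXT NOT NULL);
CREATE TABLE data (namesKey INTEGER, dataType INTEGER, data TEXT);
INSERT INTO names(name) VALUES ('name1'),('name2'),('name3'),('name4');
INSERT INTO data VALUES
 (1,1,'COUNTRY'),(1,2,'STATE'),(1,3,'CITY'),
 (2,1,'COUNTRY'),(2,2,'STATE'),(2,3,'OTHERCITY'),
 (3,1,'COUNTRY'),(3,2,'STATE'),(3,3,'COINCIDENCE'),
 (4,1,'COUNTRY'),(4,2,'OTHERSTATE'),(4,3,'COINCIDENCE');
""")

rows = con.execute("""
SELECT * FROM names
WHERE namesKey IN (SELECT namesKey FROM data WHERE dataType IS 3 AND data IS 'COINCIDENCE'
                   INTERSECT
                   SELECT namesKey FROM data WHERE dataType IS 2 AND data IS 'STATE'
                   INTERSECT
                   SELECT namesKey FROM data WHERE dataType IS 1 AND data IS 'COUNTRY')
""").fetchall()
print(rows)  # only name3 satisfies all three conditions
```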
You can join the filter table directly with the actual table to get rows with matches, and then check for only those name keys where all three search terms are matching, i.e., groups whose number of matching rows is the same as the number of all search values:
SELECT namesKey
FROM data
JOIN filter USING (dataType, data)
GROUP BY namesKey
HAVING COUNT(*) = (SELECT COUNT(*) FROM filter);
Then use these name keys as usual:
SELECT *
FROM names
WHERE namesKey IN (SELECT namesKey...);
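The filter-table approach can be checked the same way. A minimal Python sqlite3 sketch (data again trimmed to four names; note COUNT(*) equals the filter size only because each (dataType, data) pair appears at most once per namesKey):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
CREATE TABLE names (namesKey INTEGER PRIMARY KEY ASC, name TEXT NOT NULL);
CREATE TABLE data (namesKey INTEGER, dataType INTEGER, data TEXT);
CREATE TABLE filter (filterKey INTEGER PRIMARY KEY ASC,
                     dataType INTEGER, data TEXT);
INSERT INTO names(name) VALUES ('name1'),('name2'),('name3'),('name4');
INSERT INTO data VALUES
 (1,1,'COUNTRY'),(1,2,'STATE'),(1,3,'CITY'),
 (2,1,'COUNTRY'),(2,2,'STATE'),(2,3,'OTHERCITY'),
 (3,1,'COUNTRY'),(3,2,'STATE'),(3,3,'COINCIDENCE'),
 (4,1,'COUNTRY'),(4,2,'OTHERSTATE'),(4,3,'COINCIDENCE');
INSERT INTO filter(dataType, data) VALUES
 (1,'COUNTRY'),(2,'STATE'),(3,'COINCIDENCE');
""")

rows = con.execute("""
SELECT *
FROM names
WHERE namesKey IN (SELECT namesKey
                   FROM data
                   JOIN filter USING (dataType, data)
                   GROUP BY namesKey
                   HAVING COUNT(*) = (SELECT COUNT(*) FROM filter))
""").fetchall()
print(rows)  # only name3 matches every filter row
```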

Using avg() and percentage calculation with different where clauses

I'm trying to modify the query below to achieve the EXPECTED RESULT shown at the bottom of the post. How can I modify one of the queries below, or both, to get what I want?
This only returns total feedback records and the average_rating:
Version 1)
$qb = $this->createQueryBuilder('f')
->select('COUNT(f.id) AS total')
->addSelect('AVG(f.rating) AS average_rating')
->join('f.customers', 'c')
->where('c.guid = :guid')
->setParameter('guid', $guid)
->getQuery()
->execute();
Version 2)
$qb = $em->createQuery(
'SELECT
COUNT(f.id) AS total,
AVG(f.ratingSeller) AS average_rating
FROM
WhateverBundle:Feedback f
JOIN
WhateverBundle:Customer c
WHERE
c.guid = :guid'
)
->setParameter('guid', $guid)
->getResult();
Current Result:
Array
(
[0] => Array
(
[total] => 11
[average_rating] => 4
)
)
FEEDBACKS TABLE
ID RATING DELIVERED CHECKED CUSTOMER_ID
1 5 Y Y 12
2 4 Y N 12
3 4 Y N 12
4 5 Y Y 12
5 2 N Y 12
CUSTOMERS TABLE
GUID NAME
12 inanzzz
EXPECTED RESULT:
total = 11
average_rating = 4
delivered_percentage = 80% (should take only Y in count)
checked_percentage = 60% (should take only Y in count)
Use the CASE operator:
Version 1:
$qb = $this->createQueryBuilder('f')
->select('COUNT(f.id) AS total')
->addSelect('AVG(f.rating) AS average_rating')
->addSelect('AVG(CASE WHEN f.delivered = "Y" THEN 1 ELSE 0 END) AS delivered_percentage')
->addSelect('AVG(CASE WHEN f.checked = "Y" THEN 1 ELSE 0 END) AS checked_percentage')
->join('f.customers', 'c')
->where('c.guid = :guid')
->setParameter('guid', $guid)
->getQuery()
->execute();
Version 2:
$qb = $em->createQuery(
'SELECT
COUNT(f.id) AS total,
AVG(f.ratingSeller) AS average_rating,
AVG(CASE WHEN f.delivered = "Y" THEN 1 ELSE 0 END) AS delivered_percentage,
AVG(CASE WHEN f.checked = "Y" THEN 1 ELSE 0 END) AS checked_percentage
FROM
WhateverBundle:Feedback f
JOIN
WhateverBundle:Customer c
WHERE
c.guid = :guid'
)
->setParameter('guid', $guid)
->getResult();
Funnily enough, if you just used the proper type="boolean" for your entity mapping, Doctrine would store the Y and N as tinyint 1 and 0, so you would not have to use the CASE operation at all. You would also be able to use PHP true and false in your setter and getter functions.
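Underneath the DQL, this is plain conditional aggregation. A sketch with Python's sqlite3 and the sample FEEDBACKS rows (the AVG of a 0/1 flag is a fraction, so it is multiplied by 100 here to get the percentage the expected result asks for):

```python
import sqlite3

# Conditional aggregation with CASE, as in the answer, run against the
# sample FEEDBACKS rows; AVG of a 0/1 flag is the fraction of 'Y' rows.
con = sqlite3.connect(':memory:')
con.executescript("""
CREATE TABLE feedbacks (id INTEGER PRIMARY KEY, rating INTEGER,
                        delivered TEXT, checked TEXT, customer_id INTEGER);
INSERT INTO feedbacks VALUES
 (1, 5, 'Y', 'Y', 12),
 (2, 4, 'Y', 'N', 12),
 (3, 4, 'Y', 'N', 12),
 (4, 5, 'Y', 'Y', 12),
 (5, 2, 'N', 'Y', 12);
""")

total, avg_rating, delivered_pct, checked_pct = con.execute("""
SELECT COUNT(id),
       AVG(rating),
       AVG(CASE WHEN delivered = 'Y' THEN 1 ELSE 0 END) * 100,
       AVG(CASE WHEN checked   = 'Y' THEN 1 ELSE 0 END) * 100
FROM feedbacks
WHERE customer_id = 12
""").fetchone()
print(total, avg_rating, delivered_pct, checked_pct)  # 5 4.0 80.0 60.0
```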
