Shiny: BigQuery fails when user selects "All" value (R)

I am trying to use a BigQuery query to populate plots in Shiny. The query includes input values from the UI via selectInput. If the user selects a value that exists in the database, such as year 2014, the query works correctly. However, I would also like the user to be able to select "All", meaning all values, and I am not sure how to express that in the query using selectInput.
server.r
data1 <- eventReactive(input$do_sql, {
  bqr_auth(token = NULL, new_user = FALSE, verbose = FALSE)
  query = paste('select month, event, partner_name, sum(f0_) from [dataset.table] where year =', input$year1, ' and partner_name = \"', input$partner_name, '\"
    GROUP by 1,2,3
    ORDER by 1 asc
    LIMIT 10000', sep = "")
  bqr_query(projectId, datasetId, query, maxResults = 2000)
})
ui.r
selectInput("year1",
            "Year:",
            c("All", 2014, 2015)),

selectInput("partner_name",
            "Partner:",
            c("All",
              unique(as.character(data5$partner_name))))

You should slightly change the query you are constructing. Currently you have:
SELECT month, event, partner_name, SUM(f0_)
FROM [dataset.table]
WHERE year = selected_year
AND partner_name = "selected_partner_name"
GROUP BY 1,2,3
ORDER BY 1 ASC
LIMIT 10000
where the placeholders are, respectively:
selected_year --> input$year1
selected_partner_name --> input$partner_name
Instead, you should construct the query below. When the user selects "All", the second condition in each pair is true for every row, so that filter effectively disappears:
SELECT month, event, partner_name, SUM(f0_)
FROM [dataset.table]
WHERE (year = selected_year OR "selected_year" = "All")
AND (partner_name = "selected_partner_name" OR "selected_partner_name" = "All")
GROUP BY 1,2,3
ORDER BY 1 ASC
LIMIT 10000
I am not a Shiny user at all, so excuse my syntax; below is just my guess at implementing the above suggestion:
query = paste('SELECT month, event, partner_name, sum(f0_)
FROM [dataset.table]
WHERE (year =',input$year1,' OR "All" ="',input$year1,'")
AND (partner_name = \"',input$partner_name,'\" OR "All" = \"',input$partner_name,'\")
GROUP by 1,2,3
ORDER by 1 asc
LIMIT 10000', sep="")

Mikhail's solution worked perfectly for character variables, but numeric ones didn't work correctly. I decided to use a character date range instead of the numeric year I originally used. Thanks.
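The numeric case fails because, with "All" selected, the generated SQL contains year = All, where All is parsed as an identifier rather than a value. A possible workaround (just a sketch, assuming BigQuery legacy SQL, whose STRING() function casts a numeric field to a string) is to quote the year input and compare it as text on both sides:

SELECT month, event, partner_name, SUM(f0_)
FROM [dataset.table]
-- with "All" selected, both comparisons below are string literals against
-- string literals, so the clause is valid SQL and the year filter is a no-op
WHERE (STRING(year) = "selected_year" OR "selected_year" = "All")
AND (partner_name = "selected_partner_name" OR "selected_partner_name" = "All")
GROUP BY 1,2,3
ORDER BY 1 ASC
LIMIT 10000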

Related

Getting a column with count from a nested query

As a newbie in SQL, I am lost regarding nested queries.
I am trying to achieve the following: a table grouped by month with a count of all values of a column, then the same count filtered by status.
So for instance in January I could have the following result:
Jan 22: Count = 100, Count with Status filter = 57
I tried several variations along these lines:
SELECT
FORMAT ( [CreatedDate.table2] , 'yyyyMM' ) as create_month,
RecordTypeName__c,
count(*) as count_all,
count_filtered
FROM
(SELECT
FORMAT ( [CreatedDate] , 'yyyyMM' ) as create_month,
RecordTypeName__c,
Count(*) AS count_filtered
FROM DM_AccessNoAgg.DimLead
WHERE [CreatedDate] >= '2022-01-01'
AND [Status]='Qualifiziert'
GROUP BY RecordTypeName__c,FORMAT ( [CreatedDate] , 'yyyyMM' )
)
Basically I am using the same value in both cases, just that the second count has to be filtered. What's the best method to get this done?
Thanks for your help!
Pauline.
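One common way to get both counts in a single pass, rather than joining a subquery, is conditional aggregation: COUNT ignores NULLs, so a CASE expression with no ELSE branch counts only the rows that match the filter. A sketch (not from the thread) using the table and column names from the question:

SELECT
    FORMAT([CreatedDate], 'yyyyMM') AS create_month,
    RecordTypeName__c,
    COUNT(*) AS count_all,  -- every row in the group
    COUNT(CASE WHEN [Status] = 'Qualifiziert'
               THEN 1 END) AS count_filtered  -- NULL for other rows, so not counted
FROM DM_AccessNoAgg.DimLead
WHERE [CreatedDate] >= '2022-01-01'
GROUP BY RecordTypeName__c, FORMAT([CreatedDate], 'yyyyMM')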

Earliest Time in Datetime column PowerBI

Okay so I have a table like shown...
I want to use PowerBI to create a new column called 'First_Interaction' where it will say 'True' if this was the user's earliest entry for that day. Any entry that came in after the first entry will be set to "False".
This is what I want the column to be like...
Use the following DAX formula to create a column:
First_Interaction =
VAR __userName = 'Table'[UserName]
VAR __minDate =
    CALCULATE(
        MIN( 'Table'[Datetime] ),
        FILTER( 'Table', 'Table'[UserName] = __userName )
    )
RETURN
    IF( 'Table'[Datetime] = __minDate, "TRUE", "FALSE" )
Power BI doesn't support sub-second precision, so your Datetime column must be a text value. Take that into consideration for future transformations.

Replace nulls with default values in Oracle

Please consider the following Oracle beginner's case:
Table "X" contains customer data:
ID  Variable_A  Variable_B  Variable_C  Variable_D
--------------------------------------------------
1   100         null        abc         2003/07/09
2   null        2           null        null
Table "Dictionary" contains what we can regard as default values for customer data:
Variable_name  Default_Value
----------------------------
Variable_A     50
Variable_B     0
Variable_C     text
Variable_D     sysdate
The goal is to take a row in "X" by a given ID and replace its null values with the default values from "Dictionary". The concrete question is about the optimal solution: for now, my own solution loops with a MERGE INTO statement, which I think is not optimal. The code should also be flexible, so that it does not have to change when a new column is added to "X".
The direct way is to use
update X set
variable_a = coalesce(variable_a, (select default_value from Dictionary where name = 'Variable_A')),
variable_b = coalesce(variable_b, (select default_value from Dictionary where name = 'Variable_B')),
... and so on ...
Generally it should be fast enough.
Since you don't know which fields of table X will be null, you must provide every row with every default value. And since each field of X may be a different data type, the Dictionary table should hold each default value in a field of the appropriate type. Such a layout is shown in this fiddle.
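Since the fiddle itself is not reproduced here, this is roughly what such a layout looks like (a sketch, using the names assumed by the queries below):

-- one row per variable; each default sits in the column matching its type
create table Dict (
  Name     varchar2(10) primary key,
  Int_Val  number,
  Txt_Val  varchar2(100),
  Date_Val date
);

insert into Dict values ('VA', 50,   null,   null);
insert into Dict values ('VB', 0,    null,   null);
insert into Dict values ('VC', null, 'text', null);
insert into Dict values ('VD', null, null,   sysdate);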
A query which shows each row of X fully populated with either the value in X or its default becomes relatively simple.
select ID,
nvl( Var_A, da.Int_Val ) Var_A,
nvl( Var_B, db.Int_Val ) Var_B,
nvl( Var_C, dc.Txt_Val ) Var_C,
nvl( Var_D, dd.Date_Val ) Var_D
from X
join Dict da
on da.Name = 'VA'
join Dict db
on db.Name = 'VB'
join Dict dc
on dc.Name = 'VC'
join Dict dd
on dd.Name = 'VD';
Turning this into an Update statement is a little more complicated but is simple enough once you've used it a few times:
update X
set (Var_A, Var_B, Var_C, Var_D) =(
select nvl( Var_A, da.Int_Val ),
nvl( Var_B, db.Int_Val ),
nvl( Var_C, dc.Txt_Val ),
nvl( Var_D, dd.Date_Val )
from X InnerX
join Dict da
on da.Name = 'VA'
join Dict db
on db.Name = 'VB'
join Dict dc
on dc.Name = 'VC'
join Dict dd
on dd.Name = 'VD'
where InnerX.ID = X.ID )
where Var_A is null
or Var_B is null
or Var_C is null
or Var_D is null;
There is a problem with this. The default for Date types is given as sysdate, which means it will show the date and time the default table was populated, not the date and time the Update was performed. This, I assume, is not what you want. You could try to make this all work using dynamic SQL, but that will be a lot more complicated. Much too complicated for what you want to do here.
I see only two realistic options: either store a meaningful date as the default (9999-12-31, for example) or just know that every default for a date type will be sysdate and use that in your updates. That would be accomplished in the above Update just by changing one line:
nvl( Var_D, sysdate )
and getting rid of the last join.
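With those two changes, the inner select of the Update above would read:

select nvl( Var_A, da.Int_Val ),
       nvl( Var_B, db.Int_Val ),
       nvl( Var_C, dc.Txt_Val ),
       nvl( Var_D, sysdate )  -- evaluated at update time; dd join dropped
from X InnerX
join Dict da
  on da.Name = 'VA'
join Dict db
  on db.Name = 'VB'
join Dict dc
  on dc.Name = 'VC'
where InnerX.ID = X.ID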

Get count on joined tables

I have two tables (Oracle). I have marked the primary keys with a star before the column name.
Table1 columns are:
*date,
*code,
*symbol,
price,
weight
Table2 columns are:
*descriptionID
code
symbol
date
description
I need to find the following information with a query:
for a given code and a symbol on a particular day, is there any description?
For example: code = "AA" and symbol = "TEST" on 2012-4-1 in Table1 => is there at least one row (any descriptionID) with code = "AA", symbol = "TEST", date = 2012-4-1 in Table2?
I tried with the below query:
select * from Table1 t1 INNER JOIN
Table2 t2
on t1.code = t2.code and t1.symbol = t2.symbol and
TO_CHAR(t1.date, 'YYYY/MM/DD') = TO_CHAR(t1.date, 'YYYY/MM/DD')
But it doesn't give me output like:
code = AA, symbol = TEST, date 2012-4-1 => description count = 10
code = AA, symbol = TEST, date 2012-4-2 => description count = 5
code = BB, symbol = HELO, date 2012-4-1 => description count = 20
Can someone suggest a query that achieves the above output?
I don't see why you need the join:
SELECT count(*)
FROM Table2
WHERE code='AA'
AND symbol = 'TEST'
AND date = to_date('2012-04-01', 'yyyy-mm-dd')
UPDATE: (after reading your comment)
I still don't see why you need the join. Do you need some data from table1 ?
Anyway, if you want the count for all the (code, symbol, date) combinations, then why not group by?
As for the dates, better use trunc to get rid of the time parts.
So:
SELECT code, symbol, trunc(date), count(*)
FROM Table2
GROUP BY code, symbol, trunc(date)
The trunc() function takes a DATE input and returns a DATE with the time portion set to midnight, so it should do exactly what you want.

Count distinct in MDX (convert from SQL query)

SELECT COUNT (DISTINCT S.PK_Submission)
FROM Fact_Submission FS, Submission S
WHERE
FS.FK_Submission = S.PK_Submission
AND FS.FK_Submission_Date >= 20100101
AND FS.FK_Submission_Date <= 20101231
I've tried this:
SELECT
{[Measures].[Fact Submission Count]} ON AXIS(0),
Distinct({[Submission].[PK Submission] }) ON AXIS(1)
FROM [Submission]
WHERE
([Date].[Calendar Year].[2010])
but the result is the same.
Any idea how to write this in MDX? I'm pretty new at this, so I still haven't figured it out.
This is the correct answer:
WITH SET MySet AS
{[Measures].[Fact Submission Count]}
*
DISTINCT({ EXCEPT([Submission].[PK Submission].Members, [Submission].[PK Submission].[All]) })
MEMBER MEASURES.DistinctSubmissionCount AS
DISTINCTCOUNT(MySet)
SELECT {MEASURES.DistinctSubmissionCount} ON 0
FROM [Submission]
WHERE
([Date].[Calendar Year].[2010])
I have excluded the "All" member because it is also counted by the COUNT function, so I always got one extra.
