Is it possible to expect an RSpec double to be called twice with varying values? - rspec-mocks

In the rspec-mocks documentation I found:
expect(double).to receive(:msg).exactly(3).times.and_return(value1, value2, value3)
# returns value1 the first time, value2 the second, etc.
If I do the same with arguments, for example
expect(double).to receive(:msg).exactly(3).times.with(value1, value2, value3)
RSpec naturally expects msg to be called three times, each time with the arguments value1, value2, value3.
Is there a way to say: called the first time with value1, the second time with value2, and so on?

Try using .ordered, like this:
expect(double).to receive(:msg).with(value1).ordered
expect(double).to receive(:msg).with(value2).ordered
expect(double).to receive(:msg).with(value3).ordered
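For context, here is a minimal sketch of how those ordered expectations might sit inside a spec; the notifier double and the argument symbols are hypothetical, just to show the call-by-call matching:
RSpec.describe "ordered expectations" do
  it "verifies the argument of each call in turn" do
    notifier = double("notifier")

    # each expectation constrains both the argument and the position in the sequence
    expect(notifier).to receive(:msg).with(:first).ordered
    expect(notifier).to receive(:msg).with(:second).ordered
    expect(notifier).to receive(:msg).with(:third).ordered

    notifier.msg(:first)
    notifier.msg(:second)
    notifier.msg(:third)
  end
end
If a call arrives out of order, or with the wrong argument for its position, the example fails.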


Count(case when) in Redshift SQL - receiving a GROUP BY error

I'm trying to do a count(case when) in Amazon Redshift.
Using this reference, I wrote:
select
    sfdc_account_key,
    record_type_name,
    vplus_stage,
    vplus_stage_entered_date,
    site_delivered_date,
    case when vplus_stage = 'Lost' then -1 else 0 end as stage_lost_yn,
    case when vplus_stage = 'Lost' then 2000 else 0 end as stage_lost_revenue,
    case when vplus_stage = 'Lost' then datediff(month, vplus_stage_entered_date, CURRENT_DATE) else 0 end as stage_lost_months_since,
    count(case when vplus_stage = 'Lost' then 1 else 0 end) as stage_lost_count
from shared.vplus_enrollment_dim
where record_type_name = 'APM Website';
But I'm getting this error:
[42803][500310] [Amazon](500310) Invalid operation: column "vplus_enrollment_dim.sfdc_account_key" must appear in the GROUP BY clause or be used in an aggregate function; java.lang.RuntimeException: com.amazon.support.exceptions.ErrorException: [Amazon](500310) Invalid operation: column "vplus_enrollment_dim.sfdc_account_key" must appear in the GROUP BY clause or be used in an aggregate function;
Query was running fine before I added the count. I'm not sure what I'm doing wrong here -- thanks!
You cannot use an aggregate function (sum, count, etc.) without a group by.
The syntax looks like this:
select a, count(*)
from my_table
group by a
(or group by 1 in Redshift, which refers to the first column in the select list).
In your query you need to add
group by 1, 2, 3, 4, 5, 6, 7, 8
because you have eight columns other than the count.
Since I don't know your data and use case, I cannot tell you whether it will give you the right result, but the SQL will be syntactically correct.
The basic rule is:
If you are using an aggregate function (e.g. COUNT(...)), then you must supply a GROUP BY clause to define the grouping.
Exception: if all columns are aggregates (e.g. SELECT COUNT(*), AVG(sales) FROM table).
Any columns that are not aggregate functions must appear in the GROUP BY (e.g. SELECT year, month, AVG(sales) FROM table GROUP BY year, month).
Your query has a COUNT() aggregate function mixed in with non-aggregate values, which is what gives rise to the error.
Looking at your query, you probably don't want to group on all of the columns (e.g. stage_lost_revenue and stage_lost_months_since don't look like likely grouping columns). You might want to mock up a query result to figure out what you actually want from such a query.
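Putting the two answers together, here is a sketch of what the corrected query could look like; grouping on all eight non-aggregate columns is syntactically valid, but only you can check that it matches your intent:
select
    sfdc_account_key,
    record_type_name,
    vplus_stage,
    vplus_stage_entered_date,
    site_delivered_date,
    case when vplus_stage = 'Lost' then -1 else 0 end as stage_lost_yn,
    case when vplus_stage = 'Lost' then 2000 else 0 end as stage_lost_revenue,
    case when vplus_stage = 'Lost' then datediff(month, vplus_stage_entered_date, CURRENT_DATE) else 0 end as stage_lost_months_since,
    count(case when vplus_stage = 'Lost' then 1 else 0 end) as stage_lost_count
from shared.vplus_enrollment_dim
where record_type_name = 'APM Website'
-- Redshift accepts ordinal positions here: columns 1-8 of the select list
group by 1, 2, 3, 4, 5, 6, 7, 8;
As an aside, count(case when ... then 1 else 0 end) counts every row, because 0 is not null; if you want a count of only the 'Lost' rows, drop the else 0 or use sum instead of count.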

Split column value in sqlite

I am new to SQLite, and while learning I came across the substr function. In my exercise, my table name is t1 and my column value is Partha000099, which I want to increment by 1, e.g. to Partha000100. When I try
SELECT SUBSTR(MAX(ID),6) FROM t1
I get 000099 as the output. When I increment by 1 with the query below
SELECT SUBSTR(MAX(ID),6)+1 FROM t1
I get 100 as the output. Now my question is how do I construct the value back as I expect?
I tried the query below,
SELECT 'Partha' || SUBSTR(MAX(ID),6)+1 FROM t1
but I get 1 as the output. Please can someone help me?
While my solution will work, I would advise you against this type of key generation. Using SELECT MAX(ID)+1 to generate the next key is fraught with problems in more concurrent databases, and you risk generating duplicate keys in a busy application/system.
It would be better to split the key into two columns: one with the group or name 'Partha', and the other with an automatically incremented number.
However, having said that, here's how to generate the next key in the format of your example.
You need to:
Split the key into two
Increment the numeric part
Convert it back to a string
Pad it to 6 digits
Here's the SQL that will do that:
SELECT SUBSTR(ID, 1, 6) || SUBSTR('000000' || (SUBSTR(MAX(ID), 7)+1), -6) FROM t1;
To pad it to 6 digits, I prepend 6 zeroes, then grab the last 6 digits from the resulting string with this type of expression
SUBSTR(x, -6)
The reason why you got 1 is that your expression was grouped like this:
SELECT .... + 1
The .... part, your string concatenation, was then converted to a number, which resulted in 0, and 0+1 gives 1.
To get the unpadded result you could have just added some parentheses:
SELECT 'Partha' || (SUBSTR(MAX(ID),6)+1) FROM t1
^ ^
This, however, would also be wrong: it would return Partha1, because SUBSTR(..., 6) grabs the 6th character onwards, and the 6th character is the final a in Partha. So to get Partha100 you would need this:
SELECT 'Partha' || (SUBSTR(MAX(ID),7)+1) FROM t1
^
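If you do go down this road despite the concurrency caveat above, here is a minimal sketch of using that expression to insert the next key (it assumes ID is the only column you need to populate and that the prefix is always 6 characters long):
-- generate the next padded key from the current maximum and insert it
INSERT INTO t1 (ID)
SELECT SUBSTR(MAX(ID), 1, 6) || SUBSTR('000000' || (SUBSTR(MAX(ID), 7) + 1), -6)
FROM t1;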

Oracle complex update statement

I have a table where the data is as given below.
My requirement is to update this table in such a way that, within a group (grouping is done based on column A), if there is a value in column B, that same value should be written into the rows of the group where column B is null. If column B is null for all the records within a group, then a new sequence value should be generated. Also, I can't use a PL/SQL block for this; I need to write a SQL query to do it.
My expected output is given below
You won't be able to use sequence_name.nextval directly in your update statement, because the value will increase with every row, meaning you would end up with different values in column B within the same A group.
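For illustration, here is a sketch of the naive attempt being described; the sequence name t1_seq is an assumption:
update t1
set b = t1_seq.nextval
where b is null;
-- every updated row draws a fresh sequence value, so rows sharing the same A
-- end up with different values of B, which is not what is wanted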
The best way round this that I can think of is to first ensure that every group whose B values are all null gets a single value, which you can do as follows:
merge into t1 tgt
using (select a,
              b,
              rid,
              row_number() over (partition by a order by b) rn
       from (select a,
                    b,
                    rowid rid,
                    max(b) over (partition by a) max_b
             from t1)
       where max_b is null) src
on (tgt.rowid = src.rid and src.rn = 1)
when matched then
  update set tgt.b = t1_seq.nextval;
This finds the rows whose group has only null b values for a given a, and then updates one row per group with the next sequence value.
Once you've done that, you can then go ahead and populate the null values based on the max b value for that group, like so:
update t1
set b = (select max(b) from t1 t2 where t1.a = t2.a)
where b is null;
See this LiveSQL script for evidence that this works.
Something like this:
update your_table t1
set B = (select nvl(max(b), sequence_name.nextval) from your_table t2 where t2.a = t1.a)
PS: I couldn't test this.
Indeed we can't use sequences in correlated subqueries... :(
One workaround is to use merge:
merge into teste t1
using (select max(b) as m, a from teste group by a) t2
on (t1.a = t2.a)
when matched then update set b = nvl(t2.m, seq_teste.nextval);
One thing: that nextval will ALWAYS be consumed, even when it isn't inserted. If you don't want that, you might need some PL/SQL code.

Creating a new variable that counts a certain string in another variable in R

I want to create a new variable called Chatid that increases by 1 each time Chat ID: ^^^^^^ appears in the Lead variable.
This is what the .csv looks like now:
Lead,Event,Role,Data
Chat ID: ^^^^^^,,,
No Value,x,Lead,No Value
No Value,x,End-user,No Value
Man,Lead x,Lead,No Value
Man,x,Lead,No Value
Man,x,Lead,Hello
Man,x,Lead,No Value
No Value,x,End-user,Hello to you too
Man,x,Lead,how are you?
Chat ID: ^^^^^^,,,
No Value,x,Lead,No Value
No Value,x,End-user,No Value
Man,x,Lead,No Value
Man,x,Lead,Hello, how are you?
Man,x,Lead,No Value
Man,x,Lead,No Value
Man,x,Lead,Can i help you?
No Value,x,End-user,Goodmorning!
How it should look after write.csv:
Chatid, Lead,Event,Role,Data
1,Chat ID: ^^^^^^,,,
1,No Value,x,Lead,No Value
1,No Value,x,End-user,No Value
1,Man,Lead x,Lead,No Value
1,Man,x,Lead,No Value
1,Man,x,Lead,Hello
1,Man,x,Lead,No Value
1,No Value,x,End-user,Hello to you too
1,Man,x,Lead,how are you?
2,Chat ID: ^^^^^^,,,
2,No Value,x,Lead,No Value
2,No Value,x,End-user,No Value
2,Man,x,Lead,No Value
2,Man,x,Lead,Hello, how are you?
2,Man,x,Lead,No Value
2,Man,x,Lead,No Value
2,Man,x,Lead,Can i help you?
2,No Value,x,End-user,Goodmorning!
This way I want to make it possible to analyse each separate chat (if this is the best way to separate the different chats).
You could use grepl to find the occurrences of the string of interest, and then cumsum over the output. For a data.frame called df:
df <- cbind(
  Chatid = cumsum(grepl("Chat ID", df$Lead)),
  df
)
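A minimal end-to-end sketch, assuming the data sits in a well-formed CSV file; the file names chats.csv and chats_with_id.csv are hypothetical:
# read the raw export
df <- read.csv("chats.csv", stringsAsFactors = FALSE)

# number the chats: every "Chat ID" header row starts a new chat
df <- cbind(
  Chatid = cumsum(grepl("Chat ID", df$Lead, fixed = TRUE)),
  df
)

# write it back out with the new column in front
write.csv(df, "chats_with_id.csv", row.names = FALSE)
With the Chatid column in place you can analyse each chat separately, for example with split(df, df$Chatid).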

SQLite returning data in custom order

I'm using an SQLite query (in an iOS application) as follows:
SELECT * FROM tblStations WHERE StationID IN ('206','114','113','111','112','213','214','215','602','603','604')
However, I'm getting the resulting data in either descending or ascending order, when what I really want is for the data to be returned in the order I've specified in the IN clause.
Is this possible?
A trivial way to sort the results
NSArray *stationIDs = @[@206, @114, @113, @111, @112, @213, @214, @215, @602, @603, @604];
NSArray *stations = @[@{@"Id": @(604)}, @{@"Id": @(603)}, @{@"Id": @(602)}, @{@"Id": @(215)},
                      @{@"Id": @(214)}, @{@"Id": @(213)}, @{@"Id": @(112)}, @{@"Id": @(111)},
                      @{@"Id": @(113)}, @{@"Id": @(114)}, @{@"Id": @(206)}];
stations = [stations sortedArrayUsingComparator:
            ^NSComparisonResult(NSDictionary *dict1, NSDictionary *dict2)
{
    NSUInteger index1 = [stationIDs indexOfObject:dict1[@"Id"]];
    NSUInteger index2 = [stationIDs indexOfObject:dict2[@"Id"]];
    return [@(index1) compare:@(index2)];
}];
You could use a CASE expression to map these station IDs to another value that is suitable for sorting:
SELECT *
FROM tblStations
WHERE StationID IN ('206','114','113','111','112',
'213','214','215','602','603','604')
ORDER BY CASE StationID
WHEN '206' THEN 1
WHEN '114' THEN 2
WHEN '113' THEN 3
WHEN '111' THEN 4
WHEN '112' THEN 5
WHEN '213' THEN 6
WHEN '214' THEN 7
WHEN '215' THEN 8
WHEN '602' THEN 9
WHEN '603' THEN 10
WHEN '604' THEN 11
END
I don't believe there's any means of returning SQL data in an order that isn't ascending, descending or random (either intentionally so, or simply in the order the database engine chooses to return the data).
As such, it would probably make sense to simply fetch all of the data returned by the SQLite query and store it in an NSDictionary keyed on the StationID value. It would then be trivial to retrieve in the order you require.
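A sketch of that idea; fetchedRows stands in for whatever array of dictionaries your SQLite wrapper returns, and it assumes the IDs come back as NSNumbers (use NSStrings if your wrapper returns them as text):
// the desired order
NSArray *stationIDs = @[@206, @114, @113, @111, @112, @213, @214, @215, @602, @603, @604];

// index the fetched rows by StationID
NSMutableDictionary *stationsById = [NSMutableDictionary dictionary];
for (NSDictionary *row in fetchedRows) {
    stationsById[row[@"StationID"]] = row;
}

// walk the IDs in the order you want and pull each row out
NSMutableArray *orderedStations = [NSMutableArray array];
for (NSNumber *stationID in stationIDs) {
    NSDictionary *row = stationsById[stationID];
    if (row != nil) {
        [orderedStations addObject:row];
    }
}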
Add an additional column to use for sorting, e.g. a column named "sortMePlease". Fill this column according to your needs, meaning for the row for StationID 206 enter 1, for 114 enter 2, ..., and finally add "ORDER BY sortMePlease ASC" to your query.
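A minimal sketch of that suggestion, using the hypothetical sortMePlease column:
-- one-off schema change and ranking values
ALTER TABLE tblStations ADD COLUMN sortMePlease INTEGER;
UPDATE tblStations SET sortMePlease = 1 WHERE StationID = '206';
UPDATE tblStations SET sortMePlease = 2 WHERE StationID = '114';
-- ...and so on for the remaining station IDs...

-- the original query, now with an explicit sort column
SELECT * FROM tblStations
WHERE StationID IN ('206','114','113','111','112','213','214','215','602','603','604')
ORDER BY sortMePlease ASC;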
A second way of doing it (the first one being with CASE WHEN ... THEN END as already stated in another answer) is:
ORDER BY StationID=206 DESC,
StationID=114 DESC,
StationID=113 DESC,
StationID=111 DESC,
StationID=112 DESC,
StationID=213 DESC,
etc.
