Compare stored values in Selenium IDE

I am new to test automation and to Selenium IDE. With Selenium IDE, I want to store two integer values and compare them. The test should pass if the comparison result is greater than or equal to zero. So far, I have only found an option to store the values, and I'm wondering whether there is any option to compare the stored values.
Any suggestion would be helpful.
Thanks

Okay, assuming you're always subtracting A (a constant value) from B (a variable value), you can use some JavaScript to perform the test:
store | 2 | A
store | 4 | B
storeEval | var s = false; s = eval((storedVars['B'] - storedVars['A']) >= 0); | s
verifyExpression | ${s} | true
Replace the two store steps above with whatever you use to get your variables A and B.
The verifyExpression line will pass if s is true (the result is greater than or equal to zero) and fail if s stays false.

store | 2 | A
store | 4 | B
storeEval | var s = false; s = eval((storedVars['B'] - storedVars['A']) >= 0); | s
echo | ${s}
Executing: |store | 2 | A |
Executing: |store | 4 | B |
Executing: |storeEval | var s = false; s = eval((storedVars['B'] - storedVars['A']) >=0); | s |
script is: var s = false; s = eval((storedVars['B'] - storedVars['A']) >=0);
Executing: |echo | ${s} | |
echo: true
Test case passed


Kusto query to calculate a metric from 2 fields

I'm writing a query in Kusto on Azure to get the memory fragmentation value of Redis. This value is obtained by dividing the RSS memory by the memory used. The problem is that I am not able to do the calculation using these two different fields, because I need to filter the value of the "Average" field by the "usedmemoryRss" and "usedmemory" metric names. When I do the filter on the extend line, the query returns no value. The code looks like this:
AzureMetrics
| extend m1 = Average | where MetricName == "usedmemoryRss" and
| extend m2 = Average | where MetricName == "usedmemory"
| extend teste = m1 / m2
When I remove the "where" clause from the lines, it divides the value of each record by itself and returns 1. Is it possible to do that? Thank you in advance for your help.
Thanks for the answer, Justin. You gave me an idea and I solved it this way:
let m1 = AzureMetrics | where MetricName == "usedmemoryRss" | where Average != 0 | project Average;
let m2 = AzureMetrics | where MetricName == "usedmemory" | where Average != 0 | project Average;
print memory_fragmentation=toscalar(m1) / toscalar(m2)
You can verify the toscalar approach with an inline datatable:
let Average = datatable(MetricName:string, Value:long)
[
    "usedmemoryRss", 10,
    "usedmemory", 5
];
let m1 = Average
| where MetricName == "usedmemoryRss" | project Value;
let m2 = Average
| where MetricName == "usedmemory" | project Value;
print teste = toscalar(m1) / toscalar(m2)
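As a side note, if the ratio is ever needed per time bin rather than as a single scalar, both aggregates can be computed in one pass with avgif. A sketch, assuming the standard AzureMetrics schema (TimeGenerated, MetricName, Average):
AzureMetrics
| where MetricName in ("usedmemoryRss", "usedmemory")
// Aggregate both metrics in a single pass over the table
| summarize rss = avgif(Average, MetricName == "usedmemoryRss"),
            used = avgif(Average, MetricName == "usedmemory")
    by bin(TimeGenerated, 1h)
// Guard against division by zero in bins with no "usedmemory" rows
| where used > 0
| extend memory_fragmentation = rss / used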

Parse data in Kusto

I am trying to parse the below data in Kusto. Need help.
[[ObjectCount][LinkCount][DurationInUs]]
[ChangeEnumeration][[88][9][346194]]
[ModifyTargetInLive][[3][6][595903]]
I need a generic implementation without any hardcoding.
Ideally, you'd be able to change the component that produces the source data in that format to use a standard format (e.g. CSV, JSON, etc.) instead.
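For instance, if the producer could emit JSON (the field names below are hypothetical), the parsing becomes trivial:
// Hypothetical JSON equivalent of one source row
datatable(s:string)
[
    '{"Category":"ChangeEnumeration","ObjectCount":88,"LinkCount":9,"DurationInUs":346194}'
]
| extend d = parse_json(s)
| project Category = tostring(d.Category),
          ObjectCount = tolong(d.ObjectCount),
          LinkCount = tolong(d.LinkCount),
          DurationInUs = tolong(d.DurationInUs)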
The following could work on the data as-is, but you should consider it very inefficient:
let T = datatable(s:string)
[
    '[[ObjectCount][LinkCount][DurationInUs]]',
    '[ChangeEnumeration][[88][9][346194]]',
    '[ModifyTargetInLive][[3][6][595903]]',
];
let keys = toscalar(
    T
    | where s startswith "[["
    | take 1
    | project extract_all(@'\[([^\[\]]+)\]', s)
);
T
| where s !startswith "[["
| project values = extract_all(@'\[([^\[\]]+)\]', s)
| mv-apply with_itemindex = i keys on (
    extend Category = tostring(values[0]), p = pack(tostring(keys[i]), values[i + 1])
    | summarize b = make_bag(p) by Category
)
| project-away values
| evaluate bag_unpack(b)
--->
| Category | ObjectCount | LinkCount | DurationInUs |
|--------------------|-------------|-----------|--------------|
| ChangeEnumeration | 88 | 9 | 346194 |
| ModifyTargetInLive | 3 | 6 | 595903 |

Alert on error rate exceeding threshold using Azure Insights and/or Analytics

I'm sending customEvents to Azure Application Insights that look like this:
timestamp | name | customDimensions
----------------------------------------------------------------------------
2017-06-22T14:10:07.391Z | StatusChange | {"Status":"3000","Id":"49315"}
2017-06-22T14:10:14.699Z | StatusChange | {"Status":"3000","Id":"49315"}
2017-06-22T14:10:15.716Z | StatusChange | {"Status":"2000","Id":"49315"}
2017-06-22T14:10:21.164Z | StatusChange | {"Status":"1000","Id":"41986"}
2017-06-22T14:10:24.994Z | StatusChange | {"Status":"3000","Id":"41986"}
2017-06-22T14:10:25.604Z | StatusChange | {"Status":"2000","Id":"41986"}
2017-06-22T14:10:29.964Z | StatusChange | {"Status":"3000","Id":"54234"}
2017-06-22T14:10:35.192Z | StatusChange | {"Status":"2000","Id":"54234"}
2017-06-22T14:10:35.809Z | StatusChange | {"Status":"3000","Id":"54234"}
2017-06-22T14:10:39.22Z | StatusChange | {"Status":"1000","Id":"74458"}
Assuming that status 3000 is an error status, I'd like to get an alert when a certain percentage of Ids end up in the error status during the past hour.
As far as I know, Insights cannot do this by default, so I would like to try the approach described here to write an Analytics query that could trigger the alert. This is the best I've been able to come up with:
customEvents
| where timestamp > ago(1h)
| extend isError = iff(toint(customDimensions.Status) == 3000, 1, 0)
| summarize failures = sum(isError), successes = sum(1 - isError) by timestamp bin = 1h
| extend ratio = todouble(failures) / todouble(failures+successes)
| extend failure_Percent = ratio * 100
| project iff(failure_Percent < 50, "PASSED", "FAILED")
However, for my alert to work properly, the query should:
Return "PASSED" even if there are no events within the hour (another alert will take care of the absence of events)
Only take into account the final status of each Id within the hour.
As the query is written, if there are no events, it returns neither "PASSED" nor "FAILED".
It also takes into account any records with Status == 3000, which means that the example above would return "FAILED" (5 out of 10 records have Status 3000), while in reality only 1 out of 4 Ids ended up in the error state.
Can someone help me figure out the correct query?
(And optional secondary questions: Has anyone setup a similar alert using Insights? Is this a correct approach?)
As mentioned, since you're only querying on a single hour, you don't need to bin the timestamp, or use it as part of your aggregation at all.
To answer your questions:
The way to overcome having no data at all is to inject a synthetic row into your table, which will translate to a success result if no other result is found.
If you want your pass/fail criteria to be based on the final status for each Id, then you need to use argmax in your summarize - it will return the status corresponding to the maximal timestamp.
So to wrap it all up:
customEvents
| where timestamp > ago(1h)
| extend isError = iff(toint(customDimensions.Status) == 3000, 1, 0)
| summarize argmax(timestamp, isError) by tostring(customDimensions.Id)
| summarize failures = sum(max_timestamp_isError), successes = sum(1 - max_timestamp_isError)
| extend ratio = todouble(failures) / todouble(failures+successes)
| extend failure_Percent = ratio * 100
| project Result = iff(failure_Percent < 50, "PASSED", "FAILED"), IsSynthetic = 0
| union (datatable(Result:string, IsSynthetic:long) ["PASSED", 1])
| top 1 by IsSynthetic asc
| project Result
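To see why this always returns a row, here is the synthetic-row trick from the last three operators in isolation (a minimal sketch, with an empty datatable standing in for the real query):
// Stand-in for a real result set that returned no rows
datatable(Result:string, IsSynthetic:long) []
// The synthetic "PASSED" row only survives `top 1` when no real row exists
| union (datatable(Result:string, IsSynthetic:long) ["PASSED", 1])
| top 1 by IsSynthetic asc
| project Result
Any real row has IsSynthetic = 0, so it sorts ahead of the synthetic one and wins.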
Regarding the bonus question - you can set up alerting based on Analytics queries using Microsoft Flow. See here for a related question/answer.
I'm presuming that the query returns no rows if you have no data in the hour, because the timestamp bin = 1h (aka bin(timestamp, 1h)) doesn't return any bins?
But if you're only querying the last hour, I don't think you need the bin on timestamp at all.
Without having your data it's hard to repro exactly, but you could try something like this (beware syntax errors):
customEvents
| where timestamp > ago(1h)
| extend isError = iff(toint(customDimensions.Status) == 3000, 1, 0)
| summarize totalCount = count(), failures = countif(isError == 1), successes = countif(isError ==0)
| extend ratio = iff(totalCount == 0, 0.0, todouble(failures) / todouble(failures + successes))
| extend failure_Percent = ratio * 100
| project iff(failure_Percent < 50, "PASSED", "FAILED")
Hypothetically, getting rid of the hour binning should just give you back a single row here of totalCount = 0, failures = 0, successes = 0, so the math for the failure percent should give you back a 0 failure ratio, which should get you "PASSED".
Without being able to try it, I'm not sure if that works or still returns no row if there's no data.
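One quick way to check that behaviour without your data is an empty inline table (a minimal sketch):
// summarize with no `by` clause returns exactly one row, even over empty input
datatable(x:long) []
| summarize totalCount = count(), failures = countif(x == 1), successes = countif(x == 0)
This yields a single row of zeros, so the no-bin version above should indeed produce "PASSED" on an empty hour.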
For your second question, you could use something like:
let maxTimestamp = toscalar(customEvents
| where timestamp > ago(1h)
| summarize max(timestamp));
customEvents
| where timestamp == maxTimestamp
// ... more query here
to get just the row(s) that have the timestamp of the last event in the hour.

Avoid running a function multiple times in a query

I have the following query in Application Insights where I run the parsejson function multiple times in the same query.
Is it possible to reuse the data from the parsejson() function after the first invocation? Right now I call it three times in the query. I am trying to see if calling it just once might be more efficient.
EventLogs
| where Timestamp > ago(1h)
and tostring(parsejson(tostring(Data.JsonLog)).LogId) =~ '567890'
| project Timestamp,
fileSize = toint(parsejson(tostring(Data.JsonLog)).fileSize),
pageCount = tostring(parsejson(tostring(Data.JsonLog)).pageCount)
| limit 10
You can use extend for that - the JSON is then parsed once per row, and the parsed value is reused by the operators that follow:
EventLogs
| where Timestamp > ago(1h)
| extend JsonLog = parsejson(tostring(Data.JsonLog))
| where tostring(JsonLog.LogId) =~ '567890'
| project Timestamp,
fileSize = toint(JsonLog.fileSize),
pageCount = tostring(JsonLog.pageCount)
| limit 10

Sumtotal in ReportViewer

+----------+------------+------+------+--------------+---------+---------+
| | SUBJ | MIN | MAX | RESULT | STATUS | PERCENT |
| +------------+------+------+--------------+---------+---------+
| | Subj1 | 35 | 100 | 13 | FAIL | 13.00% |
|EXAM NAME | Subj2 | 35 | 100 | 63 | PASS | 63.00% |
| | Subj3 | 35 | 100 | 35 | PASS | 35.00% |
| +------------+------+------+--------------+---------+---------+
| | Total | 105 | 300 | 111 | PASS | 37.00% |
+----------+------------+------+------+--------------+---------+---------+
This is my ReportViewer report format. The SubTotal row counts the total of all the columns above. Everything is fine, but in the status column it shows PASS. I want it to show FAIL if there is a single fail in the status column. I generate Status as: if Result < Min then it is FAIL, otherwise it is PASS. Now how do I change the SubTotal row below depending upon that condition? And is there any way to show the SubTotal row directly from the database? Any suggestion.
The easiest way to do this would be to use custom code (right-click a non-display area of the report, choose Properties and click the Code tab). Calculate the pass/fail status in the detail rows, display it in the group footer and reset it in the group header:
' Must be Public so the report can read it as =Code.PassFail
Public PassFail As String

' Reset Pass or Fail status in group header
Public Function ResetAndDisplayStatusTitle() As String
    PassFail = "PASS" ' Initialise status to pass
    ResetAndDisplayStatusTitle = "Status"
End Function

' Calculate pass/fail on each detail row and remember any fails.
' The Fields collection isn't available inside custom code, so the
' cell values are passed in as arguments.
Public Function CalculatePassFail(ByVal Result As Object, ByVal Min As Object) As String
    Dim ThisResult As String
    ' Calculate whether this result is pass or fail
    If Result < Min Then
        ThisResult = "FAIL"
    Else
        ThisResult = "PASS"
    End If
    ' Remember any failure as overall failure
    If ThisResult = "FAIL" Then PassFail = "FAIL"
    CalculatePassFail = ThisResult
End Function
Then you tie in the custom code to your cells in your table as follows:
In the value for the status column in your group header you put:
=Code.ResetAndDisplayStatusTitle()
In the value for the status column in the detail row you put:
=Code.CalculatePassFail(Fields!Result.Value, Fields!Min.Value)
In the value for the status column in the group footer you put:
=Code.PassFail
With respect to getting the subtotal row directly from the database, there are a couple of ways depending on the result you are after:
Join the detail row to a subtotalling row in your SQL (so that the subtotal fields appear on every row in the dataset) and use those fields.
Again, use custom code (but this is probably overly complicated for subtotalling).
However, these tricks are only for strange circumstances and in general the normal out-of-the-box subtotalling can be tweaked to give the result you are after. If there is something specific you want to know, it is probably best to explain the problem in a separate question so that issue can be dealt with individually.
