CodedUI testing of a web page using grids - collections

At work we have just started using CodedUI. Our product contains a lot of data grids, and while the CodedUI UIMap recorder is capable of picking out individual elements, it does not seem to be able to pick out collections of elements, such as a list of every cell in a column or row, or, even more usefully, a list of lists, so that the data can be navigated in a context-sensitive way. I may be interested, for instance, in checking that the fourth column is always equal to the sum of the second and third.
Is there any way to do this sort of search in CodedUI? So far the only search methods I have come across are those used by the UIMap recorder itself, which only ever return a single object. Without this I am finding it difficult to write tests that are particularly useful.

There is a method, UITestControl.FindMatchingControls(), which returns a collection of all the elements satisfying the search conditions you set.
As far as I understand, this method gives you what you need.
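For example, here is a minimal C# sketch of that approach; the grid's Id ("resultsGrid"), the zero-based column positions, and the surrounding test plumbing are all assumptions, not part of the original question:

using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UITesting.HtmlControls;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public void CheckColumnSums(BrowserWindow browser)
{
    // Locate the grid; the Id is a placeholder for your real grid.
    var grid = new HtmlTable(browser);
    grid.SearchProperties[HtmlTable.PropertyNames.Id] = "resultsGrid";

    // FindMatchingControls returns every control that satisfies the
    // search properties, not just the first match.
    var rows = new HtmlRow(grid).FindMatchingControls();

    foreach (UITestControl row in rows)
    {
        // Collect the cells of this row, giving the "list of lists" view.
        var cells = new HtmlCell(row).FindMatchingControls();

        // Context-sensitive check: column 4 = column 2 + column 3.
        int a = int.Parse(cells[1].GetProperty("InnerText").ToString());
        int b = int.Parse(cells[2].GetProperty("InnerText").ToString());
        int sum = int.Parse(cells[3].GetProperty("InnerText").ToString());
        Assert.AreEqual(a + b, sum);
    }
}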

How to test two numbers that appear on different pages?

I have a stored procedure that produces a number, let's say 50, which is rendered as an anchor with the number as the text. When the user clicks the number, a popup opens, calls a different stored procedure, and shows 50 rows in an HTML table. The 50 rows are the disaggregation of the number the user clicked. In summary: two different aspx pages and two different stored procedures that need to show the same amount, where one amount is the aggregate and the other is the disaggregation of that aggregate.
Question: how do I test this code so that I know that if the numbers do not match, there is an error somewhere?
Note: this is a simplified example; in reality there are hundreds of anchor tags on the page.
This kind of testing falls outside of the standard code-level testing paradigm. Here you are explicitly validating the data, and it sounds like you need a utility to achieve this.
There are plenty of environments and approaches you can take, but here are two possible candidates:
SQL Management Studio: here you can write a simple script that runs through the various combinations from the two stored procedures, ensuring that the number and the rows match up (see the sketch below). This will involve some inventive T-SQL, but nothing particularly taxing. The main advantage of this approach is that you'll have bare-metal access to the data.
Unit Testing: as mentioned, your problem is somewhat outside of the typical testing scenario, where you would ordinarily mock the data and test into your business logic. However, that doesn't mean you cannot write the tests (especially if you are doing any DataSet manipulation prior to this processing). Check out this link and this one for various approaches (note: if you're using VS2008 or above, you get the testing projects built in from the Professional version up).
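As a rough illustration of the SQL Management Studio approach, here is a minimal T-SQL sketch; the procedure names (GetAggregate, GetDetail), their parameter, and the result-set shapes are assumptions:

-- GetAggregate @Id is assumed to return one row with an Amount column;
-- GetDetail @Id is assumed to return the disaggregated rows.
DECLARE @Id INT;
SET @Id = 42;

DECLARE @Summary TABLE (Amount INT);
INSERT INTO @Summary EXEC GetAggregate @Id;

DECLARE @Detail TABLE (DetailId INT); -- columns must match GetDetail's output
INSERT INTO @Detail EXEC GetDetail @Id;

-- The aggregate should equal the number of disaggregated rows.
IF (SELECT TOP 1 Amount FROM @Summary) <> (SELECT COUNT(*) FROM @Detail)
    PRINT 'MISMATCH for Id 42';
ELSE
    PRINT 'OK for Id 42';

Wrapped in a loop over all the anchor values, this gives you a repeatable data-validation utility.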
In order to test what happens when the numbers do not match, I would simply change (temporarily) one of the stored procedures to return the correct amount plus one, or to always return zero, etc.

Remove Selected Items from Search Results

Use Case:
End-user searches for something, and an ArrayCollection of Result objects is returned. This is displayed in a datagrid.
End-user selects a few of the search results and "moves" them over to another datagrid for use later.
End-user does another search.
PROBLEM:
Some of the new search results might be items the user has already selected and moved over to the second datagrid. I want to remove these from the second set of search results.
How can I do this quickly, and efficiently in Flex code?
Call disableAutoUpdate() on both array collections,
loop through the first one and, for each item of the second, remove it if it is present in the first one (or adapt the algorithm to what you really want),
then call enableAutoUpdate() at the end.
Looping through an array collection can be quick if no events are dispatched.
As a second option, you could also loop through a cheap copy made from a plain array, i.e. arrayCollection.source.concat(), or even a Vector if all your items are of the same type. That gives maximum speed, but you might lose in the long run, as you need to convert back to an ArrayCollection at the end.
So I would stick with the first option; a sketch follows below.
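Here is a minimal ActionScript sketch of the first option; the function name, the collections involved, and the assumption that items can be matched on an id property are all illustrative:

import mx.collections.ArrayCollection;

// Remove from the fresh search results any item already moved to the
// second grid. Matching items on an "id" property is an assumption.
function removeAlreadySelected(searchResults:ArrayCollection,
                               selected:ArrayCollection):void
{
    // Suspend collection events so the grids don't refresh per removal.
    searchResults.disableAutoUpdate();
    selected.disableAutoUpdate();

    for each (var chosen:Object in selected)
    {
        // Walk backwards so removals don't shift unvisited indices.
        for (var i:int = searchResults.length - 1; i >= 0; i--)
        {
            if (searchResults.getItemAt(i).id == chosen.id)
            {
                searchResults.removeItemAt(i);
            }
        }
    }

    // Re-enable events; the bound grids refresh once at this point.
    searchResults.enableAutoUpdate();
    selected.enableAutoUpdate();
}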
For the time being, I've implemented a hash collection (extends ArrayCollection). A hash only allows unique values, so in the end it serves my purpose, even though the UI might be confusing to the user. I will probably implement the above method at a later date. :)

Strategy for handling variable time queries?

I have a typical scenario that I'm struggling with from a performance standpoint. The user selects a value from a dropdown and clicks a button. A stored procedure takes that value as an input parameter, executes, and returns the results to a grid. For just one of the values ('All'), the query runs for roughly 2.5 minutes. For the rest of the values the query runs in less than 1 ms.
Obviously, having the user wait for 2.5 minutes just isn't going to fly. So, what are some typical strategies to handle this?
Some of my own thoughts:
New table that stores the information for the 'All' value and is generated nightly
Cache the data on the caching server
Any help is appreciated.
Thanks!
Update
A little bit more info:
The SP returns two result sets. The first is a GROUP BY ROLLUP summary, and the second is the first result set disaggregated (roughly 80,000 rows).
I would first look at whether you have the proper indexes in place. Using the Query Analyzer and the Database Engine Tuning Advisor is a simple and often effective way of seeing which indexes might help.
If you still have performance problems after creating the appropriate indexes, you might then look at adding tables/views to speed things up. If your query does a lot of joins, you might consider creating an indexed view that allows you to do a select with no joins on the denormalized data. Since indexed views are persisted, you can see big gains from their use.
You can read up on indexed views here:
http://msdn.microsoft.com/en-us/library/dd171921%28v=sql.100%29.aspx
and read about the Database Engine Tuning Advisor here:
http://msdn.microsoft.com/en-us/library/ms166575.aspx
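A minimal sketch of the indexed-view idea (the table and column names are assumptions; the real view would mirror the joins in the slow query):

-- An indexed view persists the joined, denormalized rows to disk.
CREATE VIEW dbo.vOrderSummary
WITH SCHEMABINDING
AS
SELECT o.OrderId, o.Amount, c.CustomerName
FROM dbo.Orders AS o
JOIN dbo.Customers AS c ON c.CustomerId = o.CustomerId;
GO

-- The unique clustered index is what actually materializes the view.
CREATE UNIQUE CLUSTERED INDEX IX_vOrderSummary
ON dbo.vOrderSummary (OrderId);
GO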
Also, how many records does "All" return? I have seen people get hung up on the "All" scenario before, but if it returns a million records or something, the data is not usable to a person anyway...
Caching data is a good thing, but... if the SP is inherently flawed, then you might want to actually fix it instead of trying to bandage it with caching.
You might also want to (since you didn't mention it here) look at the number of rows "All" returns compared to the other selections, and think about your indexes.
Also, in your SP, does "All" cause it to run a different set of T-SQL, as in a CASE or an IF... or is it running the same code, just with a different WHERE?
It might simply be that "All" just returns A LOT of records. You may want to implement paging and partial dataset return using AJAX (return the first 1000 records early so they can be displayed, and show a throbber on the screen while the rest of the dataset is returned); a paging sketch follows below.
These are all options... if the number of records really isn't that different between "All" and the others, then it probably has something to do with the query/index/program flow.
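For the paging idea, here is a minimal T-SQL sketch using ROW_NUMBER() (available from SQL Server 2005 onward); the table and column names are assumptions:

-- Return one page of the 'All' result set at a time.
DECLARE @PageNumber INT;
DECLARE @PageSize INT;
SET @PageNumber = 1;
SET @PageSize = 1000;

WITH Numbered AS
(
    SELECT OrderId, Amount,
           ROW_NUMBER() OVER (ORDER BY OrderId) AS RowNum
    FROM dbo.Results
)
SELECT OrderId, Amount
FROM Numbered
WHERE RowNum BETWEEN (@PageNumber - 1) * @PageSize + 1
                 AND @PageNumber * @PageSize;

The client can show the first page immediately and fetch the remaining pages in the background via AJAX.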

How much data is too much for an array?

I have a Flex web app that retrieves some names and addresses from a database. The project works fine, but I'd like to make it faster. Instead of making a call to the database for each name request, I could pre-load all names into an array and filter the array when the user makes a request. Before I go down this route, though, I wanted to check whether it is even feasible to have an application with 50,000 or 1 million elements in an array. What is the limit before it slows down the app? (I anticipate that it will have a lot to do with what else is going on in my app, but for this sake let's just assume the app ONLY consists of this huge array.)
Searching through a large array can be slower than necessary, particularly if you're talking about 1 million records.
Can you split it into a few still-large-but-smaller arrays? If you're always searching by account number, then divide them up based on the first digit or two digits.
To directly answer your question, though: pure AS3 processing of a 50,000-element array should be fine. Once you get over 250,000, I'd think you need to break it up.
Displaying that many UI elements is different, though. If you try to bind a chart to a dataProvider with 10,000 elements, it's too much. The same goes for a list or datagrid.
But for pure model data, not UI-bound, I'd recommend up to 250,000, in my experience.
If you're loading large amounts of data (not sure if you're using Lists, though), you could check out James Ward's post about using AsyncListView with paging to grab the data in chunks as it's needed. I'm going to try to implement something like this soon. His runnable example uses 100,000 rows with paging of 100 (it works with HttpService/AMF-type calls):
http://www.jamesward.com/2010/10/11/data-paging-in-flex-4/
Yes, you could probably stuff a few million items in an array if you wanted to, and the Flash player wouldn't yell at you. But do you really want to?
Is the application going to take longer to start if it has to download the entire database locally before being able to work? If the additional time needed to download that much data isn't significant, are a few database lookups really worth optimizing?
If you have a good use case for doing this, you're going to have to pay attention to the way you use those data structures. Looping over the array to find an item is going to be slow, so you'll want to build indexes locally, most likely using a few hash structures. The more flexible you allow the search queries to be, the more interesting the indexing issues become.
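A minimal ActionScript sketch of the hash-index idea; the record shape (an accountNumber field) and the sample data are assumptions:

import flash.utils.Dictionary;

// Build a hash index once, then look records up in constant time
// instead of scanning the whole array on every request.
var records:Array = [
    { accountNumber: "1001", name: "Ada" },   // sample data, assumption
    { accountNumber: "1002", name: "Grace" }
];

var byAccount:Dictionary = new Dictionary();
for each (var rec:Object in records)
{
    byAccount[rec.accountNumber] = rec;
}

// Constant-time lookup, no matter how large the array gets.
var hit:Object = byAccount["1001"];

One Dictionary per searchable field keeps lookups fast; the trade-off is rebuilding the indexes whenever the underlying array changes.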

Dojo datagrid and treegrid help - datagrid has a reformatting flash?

I'm having a bit of a time trying to get Dojo grids (1.5) to play nice. Specifically, I've spent about two weeks of work trying to implement a grid that allows our result set data to collapse into rows, where rows can be expanded. Data comes in as a full set in JSON format, using ItemFileReadStore as the store. Any subsequent sorts or pagings are handled by GETting a new JSON document from the application, passing new query parameters in the URL.
The nested data is only two layers deep: a top layer that is always displayed, and an array of child data with a structure identical to the top layer's. Each node has a unique ID and a cluster ID; on a parent node the unique ID and the cluster ID match.
I was initially very excited by TreeGrid, but I couldn't see how I could format it to do what I needed, namely eliminate the 'summary row' and one extra row full of null cells (???) that I just couldn't figure out how to remove unless I narrowed the query to only one cluster. I studied the test examples, built many test pages myself, and tried to understand the forest model, which as far as I could tell was unnecessary... I found very little documentation, and sources I found online hinted that TreeGrid might not be reliable...
So I decided I would try to implement the expandable/collapsible rows in DataGrid.
I flattened the JSON data and added another attribute to indicate a top-level node ('alwaysShow' = true). I built my grid programmatically and applied grid.filter() to pull only those top-level nodes. I modified that filter by extending the ItemFileReadStore _fetchItems "filter" method to allow OR querying instead of AND, and also modified it to allow keys to point to arrays: when a top-level node (small +/- icon in the cell) is clicked, the cluster ID of the parent node is added to grid.filter.allowed[] and the filter is updated, allowing nodes with that cluster_id value to be displayed. A sketch of the wiring is below.
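For readers following along, here is a minimal JavaScript sketch of that expand/collapse wiring; the grid variable, the allowed[] bookkeeping, and the query shape are reconstructions of the description above, and the custom OR-capable ItemFileReadStore filter itself is not shown:

// "grid" is assumed to be a dojox.grid.DataGrid backed by an
// ItemFileReadStore; "allowed" holds the cluster IDs whose
// children should currently be visible.
var allowed = [];

function onExpanderClick(parentItem) {
    var clusterId = grid.store.getValue(parentItem, "cluster_id");
    var i = dojo.indexOf(allowed, clusterId);
    if (i === -1) {
        allowed.push(clusterId);   // expand: show this cluster's children
    } else {
        allowed.splice(i, 1);      // collapse: hide them again
    }
    // Re-run the (custom, OR-capable) filter: top-level nodes plus any
    // node whose cluster_id is in the allowed list.
    grid.filter({ alwaysShow: true, cluster_id: allowed }, true);
}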
This worked fine on my small test set of five records (although I'd say it was a little sluggish...), but now I am pulling ~900 rows back from the application, and on expanding large clusters (~80 rows) I am seeing a very long flash of blue and white on the filter updates. I've spent most of my day trying to step through it in Firebug to find where it's happening, but the Dojo logic is very spread out. It seems to happen before the call to _Grid.js's defaultUpdate.
It's so bad that I am considering trying again with TreeGrid. I'm also considering just doing this by hand... I'm kicking myself for spending so much time trying to get Dojo to work to begin with. I would also consider a commercial "JSON to table with collapsible rows" library if anyone has any recommendations...
Any suggestions or insights? Familiarity with the flashing problem, or with how I could adapt TreeGrid to my needs? I'm aware this is a bit of a rant... Many thanks for any help.
-robbie
EDIT: I eventually gave up trying to get Dojo to do what I needed and coded it myself in less than a day. Not the best use of three weeks...
EDIT:
I just found a solution that works for me; I have added the following CSS:
.dojoxGridSummaryRow {
    visibility: collapse;
}
Basically the summary rows are probably still created, but they are not visible and are not taken into account in the table layout. That's good enough for me. Hope this solves your issue.
This won't help, but just to let you know that:
"- but I couldn't see how I could format it to do what I needed - namely eliminate the 'summary row'"
is the very EXACT same thing I'm trying to achieve, and I have not found a solution even though this looks like a very simple feature... I will let you know if I find one...
