I am reading a book and getting stuck on a few white-box testing concepts. The article at the link below is taken verbatim from the book.
http://testdesigners.com/testingstyles/ControlFlowTesting.html
1.
'ON-unit' terminology is introduced in the 'Decision Coverage' article without describing what it is. The article keeps using the term later, which is hard to follow without knowing what an ON-unit means.
Questions - Is an 'ON-unit' a path that a decision takes or traverses?
How does one 'invoke' ON-units?
2.
In the example under 'Condition Coverage'
DO K=0 TO 50 WHILE (J+K < QUEST)
The article goes on to explain - "If one is using decision testing, the criterion can be satisfied by letting the loop run from K = 0 to 51, without ever exploring the circumstance where the WHILE clause becomes false"
Questions -
By the definition of decision coverage, the test cases should exercise both the true and the false branch of each decision at least once. In other words, running K = 0 to 50 is not material: (J+K < QUEST) being true is the true branch and (J+K < QUEST) being false is the false branch. Why does the article, under decision coverage, talk about not exploring the case where the WHILE clause is false?
Also, in the first part of that sentence, the decision criterion is said to be satisfied by letting the loop run from K = 0 to K = 51, which is just the true branch. The decision criterion is not met by a test case for the true branch alone, so why does the article say that this alone is sufficient?
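To make the two exits of that loop concrete, here is how I read its behaviour, sketched in Python (the original is PL/I; the values of J and QUEST below are made up for illustration):

def run_loop(J, QUEST):
    """Python sketch of the PL/I loop: DO K=0 TO 50 WHILE (J+K < QUEST)."""
    K = 0
    while K <= 50 and (J + K < QUEST):  # counter test AND the WHILE test
        K += 1                          # (loop body omitted)
    return K

# Exit 1: the counter is exhausted; the WHILE clause stayed true throughout,
# so its false branch was never exercised.
print(run_loop(J=0, QUEST=1000))  # K runs 0..50, returns 51

# Exit 2: the WHILE clause becomes false before the counter runs out.
print(run_loop(J=0, QUEST=10))    # returns 10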
Figured out the first one: an ON-unit is nothing but an entry point. To trigger one, write at least one test case that makes the condition traverse the decision.
Second question is still open.
I'm doing my homework in programming, and I don't know how to solve this problem:
We have a set of n weights, and we put them on a scale one by one until all weights are used. We also have a string of n letters, "R" or "L", telling which pan is heavier at each moment; the pans are never in balance. No two weights have the same mass. Compute an order in which to put the weights on the scale, and the pan each weight goes on.
The goal is to find an order of putting the weights on the scale so that the input string is respected.
Input: a number 0 < n < 51, the number of weights; then the weights and the string.
Output: n lines, each with a weight and "R" or "L", the side on which you put that weight. If there are many solutions, output any of them.
Example 1:
Input:
3
10 20 30
LRL
Output:
10 L
20 R
30 L
Example 2:
Input:
3
10 20 30
LLR
Output:
20 L
10 R
30 R
Example 3:
Input:
5
10 20 30 40 50
LLLLR
Output:
50 L
10 L
20 R
30 R
40 R
I already tried to compute it with recursion, but without success. Can someone please help me with this problem or just give me hints on how to solve it?
Since you do not show any code of your own, I'll give you some ideas without code. If you need more help, show more of your work, and then I can show you Python code that solves your problem.
Your problem is suitable for backtracking. Wikipedia's definition of this algorithm is
Backtracking is a general algorithm for finding all (or some) solutions to some computational problems, notably constraint satisfaction problems, that incrementally builds candidates to the solutions, and abandons a candidate ("backtracks") as soon as it determines that the candidate cannot possibly be completed to a valid solution.
and
Backtracking can be applied only for problems which admit the concept of a "partial candidate solution" and a relatively quick test of whether it can possibly be completed to a valid solution.
Your problem satisfies those requirements. At each stage you need to choose one of the remaining weights and one of the two pans of the scale. When you place the chosen weight on the chosen pan, you determine if the corresponding letter from the input string is satisfied. If not, you reject the choice of weight and pan. If so, you continue by choosing another weight and pan.
Your overall routine first inputs and prepares the data. It then calls a recursive routine that chooses one weight and one pan at each level. Some of the information needed by each level could be put into mutable global variables, but it would be clearer if you pass all needed information as parameters. Each call to the recursive routine needs to pass:
the weights not yet used
the input L/R string not yet used
the current state of the weights on the pans, in a format that can easily be printed when finalized (perhaps an array of ordered pairs of a weight and a pan)
the current weight imbalance of the pans. This could be calculated from the previous parameter, but time is saved by passing it separately. It would be the total of the weights on the right pan minus the total of the weights on the left pan (or vice versa).
Your base case for the recursion is when the unused-weights and unused-letters are empty. You then have finished the search and can print the solution and quit the program. Otherwise you loop over all combinations of one of the unused weights and one of the pans. For each combination, calculate what the new imbalance would be if you placed that weight on that pan. If that new imbalance agrees with the corresponding letter, call the routine recursively with appropriately-modified parameters. If not, do nothing for this weight and pan.
You still have a few choices to make before coding, such as the data structure for the unused weights. Show me some of your own coding efforts then I'll give you my Python code.
Be aware that this could be slow for a large number of weights. For n weights and two pans, the total number of ways to place the weights on the pans is n! * 2**n (that is a factorial and an exponentiation). For n = 50 that is over 3e79, much too large to do. The backtracking avoids most groups of choices, since choices are rejected as soon as possible, but my algorithm could still be slow. There may be a better algorithm than backtracking, but I do not see it. Your problem seems to be designed to be handled by backtracking.
Now that you have shown more effort of your own, here is my un-optimized Python 3 code. This works for all the examples you gave, though I got a different valid solution for your third example.
def weights_on_pans():
    def solve(unused_weights, unused_tilts, placement, imbalance):
        """Place the weights on the scales using recursive
        backtracking. Return True if successful, False otherwise."""
        if not unused_weights:
            # Done: print the placement and note that we succeeded
            for weight, pan in placement:
                print(weight, 'L' if pan < 0 else 'R')
            return True  # success right now
        tilt, *later_tilts = unused_tilts
        for weight in unused_weights:
            for pan in (-1, 1):  # -1 means left, 1 means right
                new_imbalance = imbalance + pan * weight
                if new_imbalance * tilt > 0:  # both negative or both positive
                    # Continue searching since imbalance is in the proper direction
                    if solve(unused_weights - {weight},
                             later_tilts,
                             placement + [(weight, pan)],
                             new_imbalance):
                        return True  # success at a lower level
        return False  # not yet successful

    # Get the inputs from standard input. (This version has no validity checks.)
    cnt_weights = int(input())
    weights = {int(item) for item in input().split()}
    letters = input()
    # Call the recursive routine with appropriate starting parameters.
    tilts = [(-1 if letter == 'L' else 1) for letter in letters]
    solve(weights, tilts, [], 0)

weights_on_pans()
The main way I can see to speed up that code is to avoid the O(n) operations in the call to solve in the inner loop. That means changing the data structure of unused_weights, and changing how it, placement, and perhaps unused_tilts/later_tilts are modified, so that the per-call work is O(1). Those changes would complicate the code, which is why I left them out of the version above; a sketch of the idea follows.
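Roughly, the mutate-and-undo variant would look like this (my illustration, not the code above; the recursion shares one set and one list and undoes each move on backtrack):

def solve(unused_weights, tilts, depth, placement, imbalance):
    if depth == len(tilts):  # all weights placed
        for weight, pan in placement:
            print(weight, 'L' if pan < 0 else 'R')
        return True
    tilt = tilts[depth]  # index instead of slicing off the head
    for weight in list(unused_weights):  # snapshot: the set is mutated below
        for pan in (-1, 1):
            new_imbalance = imbalance + pan * weight
            if new_imbalance * tilt > 0:
                unused_weights.remove(weight)    # O(1) instead of set difference
                placement.append((weight, pan))  # O(1) instead of concatenation
                if solve(unused_weights, tilts, depth + 1,
                         placement, new_imbalance):
                    return True
                placement.pop()                  # undo the move on backtrack
                unused_weights.add(weight)
    return False

# Called the same way as before, e.g.: solve(weights, tilts, 0, [], 0)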
I am looking for a simple method to assign a number to a mathematical expression, say between 0 and 1, that conveys how simplified the expression is (1 meaning fully simplified). For example:
eval('x+1') should return 1.
eval('1+x+1+x+x-5') should return some value less than 1, because it is far from simple (i.e., it can be simplified further).
The parameter of eval() could be either a string or an abstract syntax tree (AST).
A simple idea that occurred to me was to count the number of operators (?)
EDIT: Let "simplified" be equivalent to how close a system is to the solution of a problem. E.g., given an algebra problem (a limit, derivative, integral, etc.), it should assign a number telling how close it is to the solution.
The closest metaphor I can come up with is how a maths professor would look at an incomplete problem and mentally assess it in order to tell how close the student is to the solution. Like in a maths exam, where the student didn't finish a problem worth 20 points, but the professor awards 8 out of 20. How would he come up with 8/20, and can we program such a thing?
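For illustration, my operator-counting idea could be sketched like this, comparing an expression's operator count with that of its simplified form (SymPy is an assumption here; any CAS with a simplifier would do):

# Crude sketch: score = ops(simplified) / ops(original).
from sympy import count_ops, simplify, sympify

def simplicity(expr_str):
    """Return a score in (0, 1]; 1.0 means no further simplification found."""
    expr = sympify(expr_str, evaluate=False)  # keep the expression as written
    simplest = simplify(expr)                 # SymPy's best effort
    before, after = count_ops(expr), count_ops(simplest)
    if before == 0:                           # bare symbol or constant
        return 1.0
    return min(1.0, after / before)

print(simplicity('x + 1'))                  # 1.0
print(simplicity('1 + x + 1 + x + x - 5'))  # well below 1

Of course this only measures distance to whatever simplify() happens to find, which is exactly the limitation the answer below gets at.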
I'm going to break a Stack Overflow rule and post this as an answer instead of a comment, not only because I'm pretty sure the answer is that you can't (at least, not the way you imagine), but also because I believe it can be educational to a certain degree.
Let's assume that a criterion of simplicity can be established (akin to a normal form). It seems to me that you are very close to trying to solve an analogue of the Entscheidungsproblem or the halting problem. I doubt that, in the complex rule system required for typical algebra, you can find a method that gives a correct and definitive answer for the number of steps in a series of term reductions (ipso facto an arbitrary-length computation) without actually performing it. Such an answer would imply knowing in advance whether the computation terminates, and so would contradict the fact that automatic theorem proving is, for any sufficiently powerful logic capable of representing arithmetic, an undecidable problem.
In the given example, the teacher is actually either performing that computation mentally (going step by step, applying his own sequence of rules) or giving an estimate based on his experience. But there is no generic algorithm that guarantees his sequence of steps is the simplest possible, nor that his resulting expression is the simplest one (except for trivial expressions), and hence any quantification of "distance" to a solution is meaningless.
If all this were not true, your problem would be simple: you would know the number of steps, you would know how many steps you have taken so far, and you would divide the latter by the former ;-)
Now, returning to the criterion of simplicity, I also advise you to take a look at Hilbert's 24th problem, which specifically asked for a "criteria of simplicity, or proof of the greatest simplicity of certain proofs", and at the slightly related topic of proof compression. If you are philosophically inclined to understand these subjects further, I would suggest reading the classic Gödel, Escher, Bach.
Further notes: To understand why, consider a well-known mathematical artefact, the Mandelbrot set. Each pixel's colour is calculated by determining whether the solution to the equation z(n+1) = z(n)^2 + c for a specific c is bounded; that is, "a complex number c is part of the Mandelbrot set if, when starting with z(0) = 0 and applying the iteration repeatedly, the absolute value of z(n) remains bounded however large n gets." Despite the equation being extremely simple (you know, square a number and add a constant), there is absolutely no way to know whether it will remain bounded without actually performing an infinite number of iterations, or until a cycle is found (disregarding complex heuristics). In this sense, every fractal image out there is a rough approximation that typically uses an escape-time algorithm as a heuristic to provide an educated guess as to whether the solution is bounded.
Preamble:
I have been implementing my own CAT system. The resources that have helped me most are these:
An On-line, Interactive, Computer Adaptive Testing Tutorial, 11/98 -- A good explanation of how to pick a test question based on which one would return the most information. Fascinating idea, really. The equations are not illustrated with examples, however... but there is a simulation to play with. Unfortunately the simulation is down!
Computer-Adaptive Testing: A Methodology Whose Time Has Come -- This has similar equations, although it does not use IRT or the Newton-Raphson Method. It is also Rasch, not 3PL. It does, however, have a BASIC program that is far more explicit than the usual equations that are cited. I have converted portions of the program in order to get my own system to experiment with, but I would prefer to use 1PL and/or 3PL.
Rasch Dichotomous Model vs. One-parameter Logistic Model -- This clears some stuff up, but perhaps only makes me more dangerous at this stage.
Now, the question.
I want to be able to measure someone's ability level based on a series of questions, each rated with a 1PL difficulty level, together with the person's answers and whether or not each answer is correct.
I first have to have a function that calculates the probability of a correct answer to a given item. This equation gives that probability function for 1PL:
Probability correct = e^(ability - difficulty) / (1+ e^(ability - difficulty))
I'll go with this one arbitrarily for now. Using an ability estimate of 0, we get the following probabilities (item difficulty --> probability of a correct answer):
-0.3 --> 0.574442516811659
-0.2 --> 0.549833997312478
-0.1 --> 0.52497918747894
0 --> 0.5
0.1 --> 0.47502081252106
0.2 --> 0.450166002687522
0.3 --> 0.425557483188341
This makes sense. A problem targeting their level is 50/50... and the questions are harder or easier depending on which direction you go. The harder questions have a smaller chance of coming out correct.
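As a sanity check, here is that formula in plain Python; it reproduces the table above (nothing assumed beyond the 1PL equation):

import math

def p_correct(ability, difficulty):
    """1PL probability of a correct response."""
    return math.exp(ability - difficulty) / (1 + math.exp(ability - difficulty))

# Reproduce the table above with an ability estimate of 0.
for difficulty in (-0.3, -0.2, -0.1, 0, 0.1, 0.2, 0.3):
    print(difficulty, p_correct(0, difficulty))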
Now... consider a test taker who has done five questions at these difficulties: -0.1, 0, 0.1, 0.2, 0.1. Assume they got them all correct except the one at difficulty 0.2. Assuming an ability level of 0... I would want the equations to indicate that this person is slightly above average.
So... how to calculate that with 1PL? This is where it gets hard.
Looking at the equations on the various pages... I will start with an assumed ability level and then gradually adjust it after each question, more or less like the following.
Starting Ability: B0 = 0
Ability after problem 1: B1 = B0 + [summations and function evaluated for item 1 at ability B0]
Ability after problem 2: B2 = B1 + [summations and functions evaluated for items 1-2 at ability B1]
Ability after problem 3: B3 = B2 + [summations and functions evaluated for items 1-3 at ability B2]
Ability after problem 4: B4 = B3 + [summations and functions evaluated for items 1-4 at ability B3]
Ability after problem 5: B5 = B4 + [summations and functions evaluated for items 1-5 at ability B4]
And so on.
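If I understand the literature correctly, the bracketed "summations and functions" for the 1PL case are the Newton-Raphson update B(n+1) = B(n) + sum(u_i - P_i) / sum(P_i * (1 - P_i)), where u_i is 1 for a correct answer and 0 otherwise. A minimal sketch under that assumption (this is the standard textbook form, not code from any of the cited pages):

import math

def p_correct(ability, difficulty):
    """1PL probability of a correct response (the formula from above)."""
    return math.exp(ability - difficulty) / (1 + math.exp(ability - difficulty))

def estimate_ability(difficulties, answers, iterations=10):
    """Newton-Raphson ability estimate for 1PL; answers[i] is 1 if
    item i was answered correctly, 0 otherwise."""
    theta = 0.0  # B0 = 0, the assumed starting ability
    for _ in range(iterations):
        ps = [p_correct(theta, d) for d in difficulties]
        numerator = sum(u - p for u, p in zip(answers, ps))  # residuals
        denominator = sum(p * (1 - p) for p in ps)           # test information
        # Note: if every answer is correct (or every answer is wrong), the
        # estimate diverges and the denominator shrinks toward zero, which
        # is one way to end up with division-by-zero errors.
        theta += numerator / denominator
    return theta

# The five-item example: all correct except the item at difficulty 0.2.
print(estimate_ability([-0.1, 0, 0.1, 0.2, 0.1], [1, 1, 1, 0, 1]))  # about 1.45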
From just reading papers on this, this is the gist of what the algorithm should be doing. But there are so many different ways to do it. The behaviour of my code is clearly wrong, as I get division-by-zero errors... so this is where I get lost. I've messed with information functions and taken derivatives, but my college-level math is not cutting it.
Can someone explain to me how to do this part? The literature I've read is short on examples, and the descriptions of the math appear incomplete to me. I suppose I'm asking how to do this with a 3PL model that assumes c is always zero and a is always 1.7 (or maybe -1.7, whatever works). I was trying to get to 1PL somehow anyway.
Edit: A visual guide to item response theory is the best explanation of how to do this I've seen so far, but the text gets confusing at the most critical point. I'm closer to getting this, but I'm still not understanding something. Also... the pattern of summations and functions isn't in this text like I expected.
How to do this:
This is an inefficient solution, but it works and is reasonably intuitive.
The last link I mentioned in the edit explains this.
You are given a probability function, a set of question difficulties, and a corresponding set of evaluations, i.e., whether or not the test taker got each item correct.
With that, you can get a series of functions that tell you the chance of the test taker giving exactly that response. Now multiply all of those functions together.
We now have a big mess! But it's a single function in terms of the unknown ability variable that we want to find.
Next... run a slew of numbers through this function. Whichever input returns the maximum value is the test taker's ability level. This can then be used either to determine the standard error or to pick the next question for computer-adaptive testing.
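For what it's worth, here is that recipe in Python. The probability function is the one from the question; the search grid (abilities from -4 to 4 in steps of 0.01) is an arbitrary choice of mine:

import math

def p_correct(ability, difficulty):
    """1PL probability of a correct response, as in the question."""
    return math.exp(ability - difficulty) / (1 + math.exp(ability - difficulty))

def likelihood(ability, difficulties, answers):
    """Chance of this exact response pattern at the given ability:
    the product of P for correct items and (1 - P) for incorrect ones."""
    result = 1.0
    for difficulty, correct in zip(difficulties, answers):
        p = p_correct(ability, difficulty)
        result *= p if correct else 1 - p
    return result

# The five-item example from the question: all correct except difficulty 0.2.
difficulties = [-0.1, 0, 0.1, 0.2, 0.1]
answers = [True, True, True, False, True]

# "Run a slew of numbers through this function": scan abilities on a grid
# and keep the one with the maximum likelihood.
best = max((a / 100 for a in range(-400, 401)),
           key=lambda a: likelihood(a, difficulties, answers))
print('estimated ability:', best)

The grid result is only as precise as the step size; Newton-Raphson converges faster, but the scan is hard to get wrong.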
There's a problem I've encountered a lot (in the broad fields of data analysis and AI). However, I can't name it, probably because I don't have a formal CS background. Please bear with me; I'll give two examples:
Imagine natural language parsing:
The flower eats the cow.
You have a program that takes each word, and determines its type and the relations between them. There are two ways to interpret this sentence:
1) flower (substantive) -- eats (verb) --> cow (object)
using the usual SVO word order, or
2) cow (substantive) -- eats (verb) --> flower (object)
using a more poetic word order. The program would rule out other possibilities, e.g. "flower" as a verb, since it follows "the". It would then rank the remaining possibilities: 1) has a more natural word order than 2), so it gets more points. But including the world knowledge that flowers can't eat cows, 2) still wins. So it might return both hypotheses, giving 1) a score of 30 and 2) a score of 70.
Then, it remembers both hypotheses and continues parsing the text, branching off. One branch assumes 1), one 2). If a branch reaches a contradiction, or a ranking of ~0, it is discarded. In the end it presents ranked hypotheses again, but for the whole text.
For a different example, imagine optical character recognition:
** **
** ** *****
** *******
******* **
* ** **
** **
I could look at the strokes and say, sure this is an "H". After identifying the H, I notice there are smudges around it, and give it a slightly poorer score.
Alternatively, I could run my smudge recognition first, and notice that the horizontal line looks like an artifact. After removal, I recognize that this is ll or Il, and give it some ranking.
After processing the whole image, it can be Hlumination, lllumination or Illumination. Using a dictionary and the total ranking, I decide that it's the last one.
The general problem is always some kind of parsing / understanding. Examples:
Natural languages or ambiguous languages
OCR
Path finding
Dealing with ambiguous or incomplete user input: which interpretations make sense, and which is the most plausible?
It's recursive.
It can bail out early (when a branch / interpretation doesn't make sense, or will certainly end up with a score of 0). So it's probably some kind of backtracking.
It keeps all options in mind in light of ambiguities.
It's based on simple rules at the bottom, e.g. can_eat(cow, flower) = true.
It keeps a plausibility ranking of interpretations.
It's recursive on a meta level: It can fork / branch off into different 'worlds' where it assumes different hypotheses when dealing with the next part of data.
It'll forward the individual rankings, probably using Bayesian probability, to dependent hypotheses.
In practice, there will be methods to train this thing, determine ranking coefficients, and there will be cutoffs if the tree becomes too big.
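In code, the behaviour I have in mind looks roughly like this sketch: a beam search over scored hypotheses, with a toy lexicon standing in for real rules (all the scores below are made up):

def interpret(tokens, extend, cutoff=0.01, beam_width=10):
    """Keep a ranked list of partial interpretations; extend each with the
    next token, drop implausible branches, and return the survivors."""
    hypotheses = [((), 1.0)]  # (partial interpretation, plausibility)
    for token in tokens:
        candidates = [(new, score * p)            # forward the ranking
                      for partial, score in hypotheses
                      for new, p in extend(partial, token)
                      if score * p >= cutoff]     # bail out early on ~0 branches
        hypotheses = sorted(candidates, key=lambda h: -h[1])[:beam_width]
    return hypotheses

# Toy stand-in for the rules: tag each word, with made-up plausibilities.
lexicon = {'the': [('det', 1.0)],
           'flower': [('noun', 0.9), ('verb', 0.1)],
           'eats': [('verb', 1.0)],
           'cow': [('noun', 0.9), ('verb', 0.1)]}

def extend(partial, word):
    return [(partial + ((word, tag),), p) for tag, p in lexicon[word]]

for tags, score in interpret('the flower eats the cow'.split(), extend):
    print(round(score, 3), tags)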
I have no clue what this is called. One might guess 'decision tree' or 'recursive descent', but I know those terms mean different things.
I know Prolog can solve simple cases of this, like genealogies and finding out who is whose uncle. But you have to give it all the data in code, and it doesn't seem convenient or powerful enough for my real-life cases.
I'd like to know: what is this problem called, and are there common strategies for dealing with it? Is there good literature on the topic? Are there libraries, ideally for C(++) or Python, where you can just define a bunch of rules and it works out all the rankings and hypotheses?
I don't think there is one answer that fits all the bullet points you have. But I hope my links will lead you closer to an answer or might give you a different question.
I think the closest answer is a Bayesian network, since you have probabilities affecting each other; as I understand it, it is also related to conditional probability and fuzzy logic.
You also describe a bit of genetic programming, as well as artificial neural networks.
I can name drop some more topics which might be related:
http://en.wikipedia.org/wiki/Rule-based_programming
http://en.wikipedia.org/wiki/Expert_system
http://en.wikipedia.org/wiki/Knowledge_engineering
http://en.wikipedia.org/wiki/Fuzzy_system
http://en.wikipedia.org/wiki/Bayesian_inference
In a .NET project we have a group of 200 people of two types, let's say x and y, who need to be separated into groups of 7 or 8.
We have a web page where each person writes down the other members they want to be in a group with, building a list of wanted members.
After this, there should be an algorithm to build the 7-8 member groups considering the people's ratings, with the following condition: each group has at least 2 people of each type (x/y).
I'm pretty sure there must be a well-known algorithm for something like this, but I didn't find one. Does anyone know how to do it?
This problem smells NP-hard, so I suggest using artificial intelligence tools.
A possible approach is steepest-ascent hill climbing [SAHC].
First, we define our utility function (call it u) as mentioned in the comments to the question: for each user, the number of wanted members in their group, summed over all users. Let u(illegal) = -1 for an illegal solution.
Next, we define our 'world': S is the set of all possible solutions.
For each solution s in S we define:
next(s) = {all possibilities of moving one person to a different group}
All we have to do now is run SAHC with random restarts:
1. best <- -INFINITY
2. while there is more time
3. s <- a random legal solution
4. NEXT <- next(s)
5. if max{ u(NEXT) } < u(s): //s is the top of the hill
5.1. if u(s) > best: best <- u(s) //if s is better than the previous result - store it.
5.2. go to 2. //restart the hill climbing from a different random point.
6. else:
6.1. s <- argmax{ u(NEXT) } //climb the steepest hill.
6.2. go to 4.
7. return best //when out of time, return the best solution found so far.
It is an anytime algorithm, meaning it will get a better result the more time you give it to run, and eventually [at time infinity] it will find the optimal result.
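A rough, un-optimized Python sketch of that pseudocode follows. The data layout is my own assumption: kind maps a person to 'x' or 'y', and wants maps a person to the set of people they want. Unlike step 3 above, the random start is not guaranteed legal, but illegal states score -1, so the climb either repairs them or abandons them quickly.

import random
import time

def utility(groups, wants):
    """Sum over all people of how many wanted members share their group."""
    total = 0
    for group in groups:
        members = set(group)
        for person in group:
            total += len(wants[person] & members)
    return total

def legal(groups, kind):
    """Every group has 7 or 8 people and at least 2 of each type."""
    for group in groups:
        if not 7 <= len(group) <= 8:
            return False
        xs = sum(1 for p in group if kind[p] == 'x')
        if xs < 2 or len(group) - xs < 2:
            return False
    return True

def u(groups, wants, kind):
    """u(illegal) = -1, as in the pseudocode."""
    return utility(groups, wants) if legal(groups, kind) else -1

def neighbours(groups):
    """next(s): all solutions reachable by moving one person to another group."""
    for i, group in enumerate(groups):
        for person in group:
            for j in range(len(groups)):
                if j != i:
                    new = [list(g) for g in groups]
                    new[i].remove(person)
                    new[j].append(person)
                    yield new

def sahc(people, kind, wants, seconds=10):
    best, best_u = None, float('-inf')
    deadline = time.time() + seconds
    while time.time() < deadline:       # 2. while there is more time
        random.shuffle(people)          # 3. random start (a list of people;
        s = [people[i:i + 8]            #    200 people -> 25 groups of 8)
             for i in range(0, len(people), 8)]
        while True:
            climb = max(neighbours(s), key=lambda n: u(n, wants, kind))
            if u(climb, wants, kind) <= u(s, wants, kind):  # 5. top of the hill
                if u(s, wants, kind) > best_u:              # 5.1. store if better
                    best, best_u = s, u(s, wants, kind)
                break                                       # 5.2. restart
            s = climb                                       # 6.1. climb
    return best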