I am confused by the example on page 49 of X.691-0207.
As per clause B.2.2.10 (page 48): Because UNION and INTERSECTION are both commutative, the rule for the result is given only for the case where V comes first. Where all components are V, the normal rules of ITU-T Rec. X.680 | ISO/IEC 8824-1 apply, and these are not discussed further here. The cases where all components are I always give I, and are again not listed. The rules are:
V UNION I => I
V INTERSECTION I => V   -- the resulting V is just the V part of the intersection
V EXCEPT I => V         -- the resulting V is just the V without the set difference
I EXCEPT V => I
V, ..., I => I
I, ..., V => I
Applying the arithmetic rules to the following example's constraints:
A13 ::= IA5String (SIZE(1..10, ...) ^ FROM("A".."D"))
yields:
SIZE(1..10, ...) ^ FROM("A".."D")
= { SIZE(1..10, ...), ALL } ^ { SIZE(MIN..MAX), FROM("A".."D") }
= { SIZE(1..10, ...) ^ SIZE(MIN..MAX), ALL ^ FROM("A".."D") }
= { SIZE(1..10), FROM("A".."D") }
But the example says that "A13 has an extensible effective size constraint of SIZE(1..10,...)". Where did the extensibility come from?
Any help is much appreciated.
The example you use is the equivalent of "V INTERSECTION V => V", since each of the constraints which are part of the intersection is a PER-visible constraint. It should be no surprise that the result of the intersection is also PER-visible. Note that applying an INTERSECTION does not eliminate extensibility. Rather than just looking at the informative examples in Annex B, you should look at the normative rules in clause 10.3 of X.691, which explicitly define the rules for PER-visible constraints. In particular, 10.3.9 indicates that all size constraints are PER-visible, and 10.3.21 indicates that when PER-visible constraints are combined in an INTERSECTION, the resulting constraint is PER-visible.
I am dealing with a directed weighted graph and have a question about how to initialize a set defined in the following:
Assume that the graph has the following nodes, which are subdivided into three different subsets.
//Subsets of Nodes
{int} Subset1= {44,99};
{int} Subset2={123456,123457,123458};
{int} Subset3={1,2,3,4,5,6,7,8,9,10,11,12,13,14,15};
{int} Nodes=Subset1 union Subset2 union Subset3;
Now there is a set H_j of arcs, where j is in Nodes; H_j gives all arcs outgoing from Node j.
The arcs are stored in an Excel file with the following structure (see the screenshot):
For node 44 in Nodes (Subset 1), there are the arcs <44,123456>, <44,123457>, <44,123458>. For node 66 in Nodes (Subset 2), there is no arc. Can somebody help me implement this?
Importantly, the code must use the input from the Excel file, because in my real case there will be too much data for manual input... :(
Maybe there is a really easy solution for this. I would be very thankful!
Thank you so much in advance!
This addition refers to the answer from @Alex Fleischer:
Your code seems to work also in the overall context.
I am trying to implement the following constraints within a maximization problem (the expressions (j,99) and (j,i) in the lower summation bounds represent arcs):
(screenshot of the mathematical formulation)
I tried to implement it like this:
{int} TEST = {99};
subject to {
  sum(m in M, j in a[m]) x[<44,j>] == 3;
  sum(j in destPerOrig[99], t in TEST) x[<j,t>] == 3;
  forall(i in Nodes_wo_Subset1)
    sum(j in destPerOrig[i], i in destPerOrig[i]) x[<j,i>] == 1;
}
M is a set of trains and a[m] gives a specific cost value for each individual train. CPLEX shows 33 failure messages.
The most frequent ones say that it cannot extract x[<j,i>], sum(i in destPerOrig[i]) and sum(j in destPerOrig[i]), and that x and destPerOrig are outside of the valid area.
Most probably the problem is that I implement the constraints in the wrong manner. Again, it is a directed graph.
Referring to the mathematical formulation in the screenshot: Could the format of destPerOrig[i] be a problem?
At the moment destPerOrig[44] gives {2 3 4}. But shouldn't it give
{<44 2> <44 3> <44 4>} to work within the mathematical formulation?
I hope that this is enough information for you to help me :(
I would be very thankful!
all arcs outgoing from Node j.
How to do this depends on how you store the adjacencies of the graph.
Perhaps you store a vector of arcs:
LOOP over arcs
IF arc source == node J
ADD to output
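In a general-purpose language the same idea is a small grouping loop; here is a hypothetical Python sketch (the hard-coded arc list stands in for the rows read from Excel):

```python
# Arcs as (origin, destination) pairs, standing in for the Excel rows.
arcs = [(44, 123456), (44, 123457), (44, 123458)]

# Group the arcs by their source node: one pass instead of a scan per node.
dest_per_orig = {}
for o, d in arcs:
    dest_per_orig.setdefault(o, []).append(d)

print(dest_per_orig[44])  # [123456, 123457, 123458]
```

Nodes with no outgoing arcs simply never appear as keys, which mirrors the empty set in the OPL output below.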
.mod
tuple arcE
{
string o;
string d;
}
{arcE} arcsInExcel=...;
{int} orig={ intValue(a.o) | a in arcsInExcel};
{int} destPerOrig[o in orig]={intValue(a.d) | a in arcsInExcel : intValue(a.o)==o && a.d!="" };
execute
{
writeln(orig);
writeln("==>");
writeln(destPerOrig);
}
/*
which gives
{44 66}
==>
[{2 3 4} {}]
*/
https://github.com/AlexFleischerParis/oplexcel/blob/main/readarcs.mod
.dat
SheetConnection s("readarcs.xlsx");
arcsInExcel from SheetRead(s,"A2:B5");
https://github.com/AlexFleischerParis/oplexcel/blob/main/readarcs.dat
I encountered this competitive programming problem:
nums is a vector of integers (length n)
ops is a vector of strings containing + and - (length n-1)
It can be solved with the reduce operation in Kotlin like this:
val op_iter = ops.iterator();
nums.reduce {a, b ->
when (op_iter.next()) {
"+" -> a+b
"-" -> a-b
else -> throw Exception()
}
}
reduce is described as:
Accumulates value starting with the first element and applying operation from left to right to current accumulator value and each element.
It looks like Rust vectors do not have a reduce method. How would you achieve this task?
Edit: since Rust 1.51.0, this function is called reduce.
Be aware of the similar function called fold. The difference is that reduce will produce None if the iterator is empty, while fold accepts an accumulator and will produce the accumulator's value if the iterator is empty.
The outdated answer below is left to capture the history of the debate over how to name this function:
There is no reduce in Rust 1.48. In many cases you can simulate it with fold, but be aware that the semantics of these functions are different. If the iterator is empty, fold will return the initial value, whereas reduce returns None. If you want to multiply all elements, for example, getting the result 1 for an empty set is not very logical.
Rust does have a fold_first function which is equivalent to Kotlin's reduce, but it is not stable yet; the main discussion is about naming it. It is a safe bet to use it if you are OK with nightly Rust, because there is little chance the function will be removed; in the worst case, the name will be changed. If you need stable Rust, then use fold if you are OK with an illogical result for empty sets. If not, then you'll have to implement it yourself, or find a crate such as reduce.
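The same distinction can be seen in Python's functools.reduce, shown here only as an illustration of the semantics: with no initializer it behaves like Rust's reduce (it fails on an empty iterator instead of returning a default), and with an initializer it behaves like fold:

```python
from functools import reduce
from operator import mul

# With elements, both forms agree.
assert reduce(mul, [2, 3, 4]) == 24   # first element is the start value

# Empty input: the fold-style call returns the initializer...
assert reduce(mul, [], 1) == 1

# ...while the reduce-style call has no value to return and raises.
try:
    reduce(mul, [])
except TypeError:
    print("empty iterator: no result")
```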
Kotlin's reduce takes the first item of the iterator for the starting point while Rust's fold and try_fold let you specify a custom starting point.
Here is an equivalent of the Kotlin code:
let mut it = nums.iter().cloned();
let start = it.next().unwrap();
it.zip(ops.iter()).try_fold(start, |a, (b, op)| match op.as_str() {
    "+" => Ok(a + b),
    "-" => Ok(a - b),
    _ => Err(()),
})
Playground
Or since we're starting from a vector, which can be indexed:
nums[1..]
    .iter()
    .zip(ops.iter())
    .try_fold(nums[0], |a, (b, op)| match op.as_str() {
        "+" => Ok(a + b),
        "-" => Ok(a - b),
        _ => Err(()),
    });
Playground
For a list, you can do pattern matching and iterate until the nth element, but for a tuple, how would you grab the nth element?
TL;DR: Stop trying to access the n-th element of a t-uple directly, and use a record or an array instead, as they allow random access.
You can grab the n-th element by unpacking the t-uple with value deconstruction, either by a let construct, a match construct or a function definition:
let ivuple = (5, 2, 1, 1)
let squared_sum_let =
let (a,b,c,d) = ivuple in
a*a + b*b + c*c + d*d
let squared_sum_match =
match ivuple with (a,b,c,d) -> a*a + b*b + c*c + d*d
let squared_sum_fun (a,b,c,d) =
a*a + b*b + c*c + d*d
The match-construct has here no virtue over the let-construct, it is just included for the sake of completeness.
Do not use t-uples, Don¹
There are only a few cases where using t-uples to represent a type is the right thing to do. Most of the time, we pick a t-uple because we are too lazy to define a type, and we should treat the problem of accessing the n-th field of a t-uple, or of iterating over the fields of a t-uple, as a serious signal that it is time to switch to a proper type.
There are two natural replacements to t-uples: records and arrays.
When to use records
We can see a record as a t-uple whose entries are labelled; as such, they are definitely the most natural replacement to t-uples if we want to access them directly.
type ivuple = {
a: int;
b: int;
c: int;
d: int;
}
We then access the field a of a value x of type ivuple directly by writing x.a. Note that records are easily copied with modifications, as in let y = { x with d = 0 }. There is no natural way to iterate over the fields of a record, mostly because a record does not need to be homogeneous.
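The "copied with modifications" idiom has a close analogue in Python's frozen dataclasses; this sketch is only an illustration of the idea, with Ivuple mirroring the record above:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Ivuple:
    a: int
    b: int
    c: int
    d: int

x = Ivuple(5, 2, 1, 1)
y = replace(x, d=0)   # like  let y = { x with d = 0 }
print(x.a, y.d)       # 5 0
```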
When to use arrays
A large² homogeneous collection of values is adequately represented by an array, which allows direct access, iterating and folding. A possible inconvenience is that the size of an array is not part of its type, but for arrays of fixed size, this is easily circumvented by introducing a private type — or even an abstract type. I described an example of this technique in my answer to the question “OCaml compiler check for vector lengths”.
Note on float boxing
Floats are unboxed in arrays of floats and in records whose fields are all floats, but not in t-uples. Moving from a t-uple to one of these types should therefore not hurt the performance of our numeric computations, and may even improve it.
¹ See the TeXbook.
² Large starts near 4.
Since the length of OCaml tuples is part of the type and hence known (and fixed) at compile time, you get the n-th item by straightforward pattern matching on the tuple. For the same reason, the problem of extracting the n-th element of an "arbitrary-length tuple" cannot occur in practice - such a "tuple" cannot be expressed in OCaml's type system.
You might still not want to write out a pattern every time you need to project a tuple, and nothing prevents you from generating the functions get_1_1...get_i_j... that extract the i-th element from a j-tuple for any combination of i and j occurring in your code, e.g.
let get_1_1 (a) = a
let get_1_2 (a,_) = a
let get_2_2 (_,a) = a
let get_1_3 (a,_,_) = a
let get_2_3 (_,a,_) = a
...
Not necessarily pretty, but possible.
Note: Previously I had claimed that OCaml tuples can have at most length 255, so you could simply generate all possible tuple projections once and for all. As @Virgile pointed out in the comments, this is incorrect - tuples can be huge. This means that it is impractical to generate all possible tuple projection functions upfront, hence the restriction "occurring in your code" above.
It's not possible to write such a function in full generality in OCaml. One way to see this is to think about what type the function would have. There are two problems. First, each size of tuple is a different type. So you can't write a function that accesses elements of tuples of different sizes. The second problem is that different elements of a tuple can have different types. Lists don't have either of these problems, which is why you can have List.nth.
If you're willing to work with a fixed size tuple whose elements are all the same type, you can write a function as shown by @user2361830.
Update
If you really have collections of values of the same type that you want to access by index, you should probably be using an array.
Here is a function which returns, as a string, the OCaml function you need to do that ;) Very helpful; I use it frequently.
let tup len n =
  if n >= 0 && n < len then
    let rec rep str nn = match nn < 1 with
      | true -> ""
      | _ -> str ^ (rep str (nn - 1)) in
    let txt1 = "let t" ^ (string_of_int len) ^ "_" ^ (string_of_int n)
               ^ " tup = match tup with |" ^ (rep "_," n) ^ "a"
    and txt2 = "," ^ (rep "_," (len - n - 2))
    and txt3 = "->a" in
    if n = len - 1 then
      print_string (txt1 ^ txt3)
    else
      print_string (txt1 ^ txt2 ^ "_" ^ txt3)
  else raise (Failure "Error") ;;
For example:
tup 8 6;;
returns:
let t8_6 tup = match tup with |_,_,_,_,_,_,a,_->a
and of course:
val t8_6 : 'a * 'b * 'c * 'd * 'e * 'f * 'g * 'h -> 'g = <fun>
I need to implement quicksort in SML for a homework assignment, and I'm lost. I was previously unfamiliar with how quicksort was implemented, so I read up on that, but every implementation I found was an imperative one. Those don't look too difficult, but I had no idea how to implement quicksort functionally.
Wikipedia happens to have quicksort code in Standard ML (which is the language required for my assignment), but I don't understand how it's working.
Wikipedia code:
val filt = List.filter
fun quicksort << xs = let
fun qs [] = []
| qs [x] = [x]
| qs (p::xs) = let
val lessThanP = (fn x => << (x, p))
in
qs (filt lessThanP xs) @ p :: (qs (filt (not o lessThanP) xs))
end
in
qs xs
end
In particular, I don't understand this line: qs (filt lessThanP xs) @ p :: (qs (filt (not o lessThanP) xs)). filt will return a list of everything in xs less than p*, which is concatenated with p, which is cons-ed onto everything >= p.*
*assuming the << (x, p) function returns true when x < p. Of course it doesn't have to be that.
Actually, typing this out is helping me understand what's going on a bit. Anyways, I'm trying to compare that SML function to wiki's quicksort pseudocode, which follows.
function quicksort(array, 'left', 'right')
// If the list has 2 or more items
if 'left' < 'right'
// See "Choice of pivot" section below for possible choices
choose any 'pivotIndex' such that 'left' ≤ 'pivotIndex' ≤ 'right'
// Get lists of bigger and smaller items and final position of pivot
'pivotNewIndex' := partition(array, 'left', 'right', 'pivotIndex')
// Recursively sort elements smaller than the pivot
quicksort(array, 'left', 'pivotNewIndex' - 1)
// Recursively sort elements at least as big as the pivot
quicksort(array, 'pivotNewIndex' + 1, 'right')
Where partition is defined as
// left is the index of the leftmost element of the array
// right is the index of the rightmost element of the array (inclusive)
// number of elements in subarray = right-left+1
function partition(array, 'left', 'right', 'pivotIndex')
'pivotValue' := array['pivotIndex']
swap array['pivotIndex'] and array['right'] // Move pivot to end
'storeIndex' := 'left'
for 'i' from 'left' to 'right' - 1 // left ≤ i < right
if array['i'] < 'pivotValue'
swap array['i'] and array['storeIndex']
'storeIndex' := 'storeIndex' + 1
swap array['storeIndex'] and array['right'] // Move pivot to its final place
return 'storeIndex'
So, where exactly is the partitioning happening? Or am I thinking about SML's quicksort wrongly?
So, where exactly is the partitioning happening? Or am I thinking about SML's quicksort wrongly?
A purely functional implementation of quicksort works by structural recursion on the input list (IMO, this is worth mentioning). Moreover, as you can see, the two calls to "filt" partition the input list into two sublists (say A and B), which can then be treated individually. What is important here is that:
all elements of A are less than or equal to the pivot element ("p" in the code)
all elements of B are greater than the pivot element
An imperative implementation works in place, by swapping elements within the same array. In the pseudocode you've provided, the postcondition of the "partition" function is that you have two subarrays: one starting at 'left' of the input array and ending just before 'pivotNewIndex', and another starting right after 'pivotNewIndex' and ending at 'right'. What is important here is that these two subarrays can be seen as representations of the sublists A and B.
I think that by now, you have the idea where the partitioning step is happening (or conversely, how the imperative and functional are related).
You said this:
filt will return a list of everything in xs less than p*, which is concatenated with p, which is cons-ed onto everything >= p.*
That's not quite accurate. filt will return a list of everything in xs less than p, but that new list isn't immediately concatenated with p. The new list is in fact passed to qs (recursively), and whatever qs returns is concatenated with p.
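To make the data flow concrete, here is a Python transliteration (an illustration, not the SML itself): each filtered sublist is sorted recursively first, and only then is the pivot spliced between the two sorted results:

```python
def qs(lt, xs):
    # Structural recursion on the list, mirroring qs in the SML code.
    if len(xs) <= 1:
        return xs
    p, rest = xs[0], xs[1:]
    less = [x for x in rest if lt(x, p)]      # filt lessThanP xs
    geq = [x for x in rest if not lt(x, p)]   # filt (not o lessThanP) xs
    return qs(lt, less) + [p] + qs(lt, geq)   # qs (...) @ p :: qs (...)

print(qs(lambda a, b: a < b, [3, 1, 4, 1, 5, 9, 2, 6]))
# [1, 1, 2, 3, 4, 5, 6, 9]
```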
In the pseudocode version, the partitioning happens in-place in the array variable. That's why you see swap in the partition loop. Doing the partitioning in-place is much better for performance than making a copy.
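For contrast, the in-place partition from the pseudocode can be transliterated to Python like this (a sketch of the same Lomuto scheme, not code from the question):

```python
def partition(array, left, right, pivot_index):
    pivot_value = array[pivot_index]
    # Move the pivot to the end.
    array[pivot_index], array[right] = array[right], array[pivot_index]
    store_index = left
    for i in range(left, right):                   # left <= i < right
        if array[i] < pivot_value:
            array[i], array[store_index] = array[store_index], array[i]
            store_index += 1
    # Move the pivot to its final place.
    array[store_index], array[right] = array[right], array[store_index]
    return store_index

a = [3, 7, 1, 5, 2]
print(partition(a, 0, len(a) - 1, 0))   # 2: the pivot 3 ends up at index 2
print(a)                                # [2, 1, 3, 5, 7]
```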
I've been experimenting with genetic algorithms as of late and now I'd like to build mathematical expressions out of the genomes (For easy talk, its to find an expression that matches a certain outcome).
I have genomes consisting of genes which are represented by bytes; one genome can look like this: {12, 127, 82, 35, 95, 223, 85, 4, 213, 228}. The length is not predefined (although it must fall in a certain range), and neither is the form it takes. That is, any entry can take any byte value.
Now the trick is to translate this into mathematical expressions. It's fairly easy to determine basic expressions. For example: pick the first 2 values and treat them as operands, pick the 3rd value as an operator (+, -, *, /, ^, mod), then pick the 4th value as an operand and the 5th value as an operator applied to the result of the first operator and the new operand (or just handle the whole genome as a postfix expression).
The complexity rises when you start allowing priority rules. For example, when the entry at index 2 represents a '(', you're bound to have a ')' somewhere further on; it cannot be at entry 3, but it isn't necessarily at entry 4 either.
The same goes for many other things: you can't end up with an operator at the end, you can't end up with a loose number, etc.
Now I could make a HUGE switch statement (for example) covering all the possibilities, but this would make the code unreadable. I was hoping someone out there knows a good strategy for tackling this.
Thanks in advance!
** EDIT **
On request: The goal I'm trying to achieve is to make an application which can resolve a function for a set of numbers. As for the example I've given in the comment below: {4, 11, 30} and it might come up with the function (X ^ 3) + X
Belisarius in a comment gave a link to an identical topic: Algorithm for permutations of operators and operands
My code:
private static double ResolveExpression(byte[] genes, double valueForX)
{
// folowing: https://stackoverflow.com/questions/3947937/algorithm-for-permutations-of-operators-and-operands/3948113#3948113
Stack<double> operandStack = new Stack<double>();
for (int index = 0; index < genes.Length; index++)
{
int genesLeft = genes.Length - index;
byte gene = genes[index];
bool createOperand;
// only when there are enough possible operators left, possibly add operands
if (genesLeft > operandStack.Count)
{
// only when there are at least 2 operands on the stack
if (operandStack.Count >= 2)
{
// randomly determine whether to create an operand by treating everything below 127 as an operand and the rest as an operator (better than / 2 due to 0 values)
createOperand = gene < byte.MaxValue / 2;
}
else
{
// else we need an operand for sure since an operator is illegal
createOperand = true;
}
}
else
{
// false for sure since there are too many operands to complete otherwise
createOperand = false;
}
if (createOperand)
{
operandStack.Push(GeneToOperand(gene, valueForX));
}
else
{
// the right-hand operand was pushed last, so pop it first
double right = operandStack.Pop();
double left = operandStack.Pop();
double result = PerformOperator(gene, left, right);
operandStack.Push(result);
}
}
// should be 1 operand left on the stack which is the ending result
return operandStack.Pop();
}
private static double PerformOperator(byte gene, double left, double right)
{
// There are 6 operators currently supported, namely: +, -, *, /, ^ and log (Math.Log)
int code = gene % 6;
switch (code)
{
case 0:
return left + right;
case 1:
return left - right;
case 2:
return left * right;
case 3:
return left / right;
case 4:
return Math.Pow(left, right);
case 5:
return Math.Log(left, right);
default:
throw new InvalidOperationException("Impossible state");
}
}
private static double GeneToOperand(byte gene, double valueForX)
{
// We only support numbers 0 - 9 and X
int code = gene % 11; // Get a value between 0 and 10
if (code == 10)
{
// 10 is a placeholder for x
return valueForX;
}
else
{
return code;
}
}
Use "post-fix" notation. That handles priorities very nicely.
Post-fix notation handles the "grouping" or "priority rules" trivially.
For example, the expression b**2-4*a*c, in post-fix is
b, 2, **, 4, a, *, c, *, -
To evaluate a post-fix expression, you simply push the values onto a stack and execute the operations.
So the above becomes something approximately like the following.
stack.push( b )
stack.push( 2 )
x, y = stack.pop(), stack.pop(); stack.push( y ** x )
stack.push( 4 )
stack.push( a )
x, y = stack.pop(), stack.pop(); stack.push( y * x )
stack.push( c )
x, y = stack.pop(), stack.pop(); stack.push( y * x )
x, y = stack.pop(), stack.pop(); stack.push( y - x )
To make this work, you need to partition your string of bytes into values and operators. You also need to check the "arity" of all your operators, to be sure that the number of operators and the number of operands balance out. In this case, the number of binary operators plus 1 equals the number of operands; unary operators don't require extra operands.
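A minimal evaluator along these lines might look as follows in Python (a sketch with an assumed token format: numbers are operands, strings are binary operators):

```python
def eval_postfix(tokens):
    ops = {'+': lambda x, y: x + y, '-': lambda x, y: x - y,
           '*': lambda x, y: x * y, '**': lambda x, y: x ** y}
    stack = []
    for t in tokens:
        if isinstance(t, str):
            # The right-hand operand was pushed last, so pop it first.
            y, x = stack.pop(), stack.pop()
            stack.append(ops[t](x, y))
        else:
            stack.append(t)
    # Arity check: binary operators + 1 must equal the operand count.
    assert len(stack) == 1, "operators and operands do not balance"
    return stack[0]

# b**2 - 4*a*c with b = 3, a = 1, c = 5  ->  9 - 20
print(eval_postfix([3, 2, '**', 4, 1, '*', 5, '*', '-']))  # -11
```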
As ever with GA a large part of the solution is choosing a good representation. RPN (or post-fix) has already been suggested. One concern you still have is that your GA might throw up expressions which begin with operators (or mismatch operators and operands elsewhere) such as:
+,-,3,*,4,2,5,+,-
A (small) part of the solution would be to define evaluations for operand-less operators. For example one might decide that the sequence:
+
evaluates to 0, which is the identity element for addition. Naturally
*
would evaluate to 1. Mathematics may not have figured out what the identity element for division is, but APL has.
Now you have the basis of an approach which doesn't care whether you get the right sequence of operators and operands, but you still have a problem when you have too many operands for the number of operators. That is, what is the interpretation of the following (postfix)?
2,4,5,+,3,4,-
which (possibly) evaluates to
2,9,-1
Well, now you have to invent your own convention if you want to reduce this to a single value. But you could adopt the convention that the GA has created a vector-valued function.
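Both conventions together might be sketched like this in Python (an illustration restricted to +, * and -; missing operands fall back to an identity-style default, and whatever is left on the stack is returned as a vector):

```python
def eval_lenient(tokens):
    # Fallback values for operators that find too few operands:
    # 0 for + and -, 1 for * (the identity elements discussed above).
    fallback = {'+': 0, '-': 0, '*': 1}
    stack = []
    for t in tokens:
        if t in fallback:
            y = stack.pop() if stack else fallback[t]
            x = stack.pop() if stack else fallback[t]
            stack.append(x + y if t == '+' else x - y if t == '-' else x * y)
        else:
            stack.append(t)
    return stack  # leftover operands make this a vector-valued result

print(eval_lenient([2, 4, 5, '+', 3, 4, '-']))  # [2, 9, -1]
print(eval_lenient(['+']))                      # [0]
print(eval_lenient(['*']))                      # [1]
```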
EDIT: response to OP's comment ...
If a byte can represent either an operator or an operand, and if your program places no restrictions on where a genome can be split for reproduction, then there will always be a risk that the offspring represents an invalid sequence of operators and operands. Consider, instead of having each byte encode either an operator or an operand, a byte could encode an operator+operand pair (you might run out of bytes quickly so perhaps you'd need to use two bytes). Then a sequence of bytes might be translated to something like:
(plus 1)(plus x)(power 2)(times 3)
which could evaluate, following a left-to-right rule with a meaningful interpretation for the first term, to 3((x+1)^2)