Currently, if I want to generate a list identical to a previously generated one in Specman e, I use:
<'
struct A {
    ListA : list of uint;
    keep ListA.size() == 5;
    keep ListA.sum(it) <= 20;
};

struct B {
    ListB : list of uint;
};

extend sys {
    A1 : A;
    B1 : B;

    // Keeps
    keep B1.ListB.size() == read_only(A1.ListA.size());
    keep for each in B1.ListB {
        it == read_only(A1.ListA[index]);
    };
};
'>
Is there a cleaner way to do this generation? A one-line keep?
You could say:
keep B1.ListB == A1.ListA.copy();
But using the generator to create an exact copy is very inefficient. In the end, there is nothing to generate...
Instead, use a simple assignment.
extend sys {
    ...
    post_generate() is also {
        B1.ListB = A1.ListA.copy(); // shallow copy is OK for uints; use deep_copy() when appropriate
    };
};
Depending on other members, you might not even have to generate B1 at all.
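For instance, assuming B1 has no other fields that actually need generation, a sketch (not verified) that replaces the extend sys above would mark B1 as a do-not-generate field with the ! prefix and do the copy procedurally:
extend sys {
    A1 : A;
    !B1 : B; // '!' excludes B1 from generation entirely
    post_generate() is also {
        B1 = new;                   // allocate B procedurally
        B1.ListB = A1.ListA.copy(); // plain assignment, no constraints involved
    };
};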
How will Kotlin handle the following code?
Will a temporary collection of 5,000,000 integers be created, or will the filter feed its results directly to the forEach, meaning that only about 20 integers are ever looked at?
If an intermediate collection is created, how can I avoid it?
Code:
class Tests {
    @Test
    fun test() {
        var counter = 0
        (1..10_000_000).filter { it % 2 == 1 }.forEach {
            counter++
            if (counter > 10)
                return
        }
    }
}
Your code sample uses operations on Iterable<T>, which works eagerly: the .filter { ... } call will process the whole range and produce a list storing intermediate results.
To alter that, consider using Sequence<T> (e.g. with .asSequence()) that works lazily, so that the intermediate operations such as .filter { ... } produce another lazy sequence and only do the work when items are queried by terminal operations such as .forEach { ... }:
(1..10_000_000).asSequence()
    .filter { it % 2 == 1 }
    .forEach { /* ... */ }
See: Kotlin's Iterable and Sequence look exactly same. Why are two types required?
You can actually see the answer to your question pretty quickly by just adding println(it) into filter:
//...
.filter {
    println(it)
    it % 2 == 1
}
//...
You'll see every number printed. That's because your processing works eagerly as explained here.
As already suggested, lazy Sequences come to the rescue:
(1..10_000_000).asSequence()
Now, the println(it) in filter will only print the numbers 1..21, which is definitely preferable in your example.
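Putting it together, a lazy version of the original test might look like this (a sketch, assuming JUnit's @Test as in the question; take(10) stands in for the counter/return bookkeeping):
import org.junit.Test

class Tests {
    @Test
    fun test() {
        var counter = 0
        (1..10_000_000).asSequence()
            .filter { it % 2 == 1 } // evaluated lazily, one element at a time
            .take(10)               // stop pulling elements after 10 odd numbers
            .forEach { counter++ }
        // only roughly the first 20 numbers of the range are ever examined
    }
}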
I have my_list, which is defined this way:
struct my_struct {
    comparator[2] : list of int(bits:16);
    something_else[2] : list of uint(bits:16);
};
...
my_list[10] : list of my_struct;
It is forbidden for comparators at the same index (0 or 1) to be equal anywhere in the list. When I constrain it this way (e.g. for index 0):
keep my_list.all_different(it.comparator[0]);
I get a compilation error:
*** Error: GEN_NO_GENERATABLE_NOTIF:
Constraint without any generatable element.
...
keep my_list.all_different(it.comparator[0]);
How can I generate them all different? I'd appreciate any help.
It also works in one go:
keep for each (elem) in my_list {
    elem.comparator[0] not in my_list[0..max(0, index-1)].apply(.comparator[0]);
    elem.comparator[1] not in my_list[0..max(0, index-1)].apply(.comparator[1]);
};
When you reference my_list.comparator, it doesn't do what you think it does. What happens is that it concatenates all the comparator lists into one big 20-element list. Try it out by removing your constraint and printing it:
extend sys {
    my_list[10] : list of my_struct;

    run() is also {
        print my_list.comparator;
    };
};
What you can do in this case is construct your own list of comparator[0] elements:
extend sys {
    comparators0 : list of int;
    keep comparators0.size() == my_list.size();
    keep for each (comp) in comparators0 {
        comp == my_list.comparator[index * 2];
    };
    keep comparators0.all_different(it);

    // just to make sure that we've sliced the appropriate elements
    run() is also {
        print my_list[0].comparator[0], comparators0[0];
        print my_list[1].comparator[0], comparators0[1];
        print my_list[2].comparator[0], comparators0[2];
    };
};
You can apply an all_different() constraint on this new list. To make sure it's working, adding the following constraint should cause a contradiction:
extend sys {
    // this constraint should cause a contradiction
    keep my_list[0].comparator[0] == my_list[1].comparator[0];
};
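The same slicing trick should also work for index 1, since comparator[1] of the i-th struct sits at position index * 2 + 1 of the concatenated list (untested sketch):
extend sys {
    comparators1 : list of int;
    keep comparators1.size() == my_list.size();
    keep for each (comp) in comparators1 {
        comp == my_list.comparator[index * 2 + 1];
    };
    keep comparators1.all_different(it);
};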
I've written the following code for a heap of Node*s, which are found in module node:
import std.exception, std.container;
public import node;

alias NodeArray = Array!(const(Node)*);
alias NodeHeap = BinaryHeap!(NodeArray, cmp_node_ptr);

auto make_heap() {
    return new NodeHeap(NodeArray(cast(const(Node)*)[]));
}

void insert(NodeHeap* heap, in Node* u) {
    enforce(heap && u);
    heap.insert(u);
}

pure bool cmp_node_ptr(in Node* a, in Node* b) {
    enforce(a && b);
    return (a.val > b.val);
}
I then tried running the following unit tests on it, where make_leaf returns a Node* initialized with the argument given:
unittest {
    auto u = make_leaf(10);
    auto heap = make_heap();
    insert(heap, u); // bad things happen here
    assert(heap.front == u);
    auto v = make_leaf(20);
    insert(heap, v);
    assert(heap.front == u); // assures heap property
}
The tests make it to the line I comment-marked, and then throw an enforcement error on the line enforce(a && b) in cmp_node_ptr. I'm totally lost as to why this is happening.
You are doing the wrong thing in this expression:
NodeArray(cast(const(Node)*)[])
You obviously want to create an empty NodeArray, but what you actually get is a NodeArray with one null item. The NodeArray constructor takes a list of values for the new array as its arguments, and you are passing a single "empty array" (which is essentially a null pointer), thus creating a NodeArray with one null element.
The correct way is simply not to pass that expression at all (an explicit empty NodeArray() would also work, but it is unnecessary):
auto make_heap() {
    return new NodeHeap();
}
Make this change and everything will be fine.
P.S. It seems that the D notation for multiple arguments of type U (U[] values...) made you think that the constructor accepts another array as an initialiser.
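To illustrate the point about the typed-variadic constructor (this(U)(U[] values...)), here is a small standalone sketch, not taken from the original code: every argument becomes one element, so a single null pointer still yields a one-element container:
import std.container : Array;

void main() {
    // each argument is one element
    auto three = Array!int(1, 2, 3);
    assert(three.length == 3);

    // a single (null) pointer argument -> one element, not an empty container
    int* p = null;
    auto one = Array!(int*)(p);
    assert(one.length == 1 && one[0] is null);

    // a default-initialised Array really is empty
    Array!(int*) empty;
    assert(empty.length == 0);
}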
In an object, I have an array of const handles to objects of another specific class. In a method, I may want to return one of these handles as an in/out parameter. Here is a simplified example:
class A {}

class B {
    const(A)[] a;

    this() {
        a = [new A(), new A(), new A()];
    }

    void assign_const(const(A)* value) const {
        // *value = a[0]; // fails with: Error: cannot modify const expression *value
    }
}

void main() {
    const(A) a;
    B b = new B();
    b.assign_const(&a);
    assert(a == b.a[0]); // fails .. obviously
}
I do not want to remove the const on the original array. Class B is meant as a kind of view onto a collection of constant A items. I'm new to D, coming from C++. Have I messed up const-correctness the D way? I've tried several ways to get this to work but have no clue how to get it right.
What is the correct way to perform this lookup without "evil" casting?
Casting away const and modifying an element is undefined behavior in D. Don't do it. Once something is const, it's const. If the element of an array is const, then it can't be changed. So, if you have const(A)[], then you can append elements to the array (since it's the elements that are const, not the array itself), but you can't alter any of the elements in the array. It's the same with immutable. For instance, string is an alias for immutable(char)[], which is why you can append to a string, but you can't alter any of its elements.
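A quick standalone illustration of that point (a sketch, not from the question's code):
class A {}

void main() {
    const(A)[] arr;
    arr ~= new A();      // fine: the array variable itself is mutable
    // arr[0] = new A(); // error: cannot modify const expression arr[0]

    string s = "ab";
    s ~= 'c';            // fine: appending re-binds the slice
    // s[0] = 'x';       // error: cannot modify immutable expression s[0]
}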
If you want an array of const objects where you can alter the elements in the array, you need another level of indirection. In the case of structs, you could use a pointer:
const(S)*[] arr;
but that won't work with classes, because if C is a class, then C* points to a reference to a class object, not to the object itself. For classes, you need to do
Rebindable!(const C)[] arr;
Rebindable is in std.typecons.
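Applied to the example above, a minimal sketch (assuming a ref Rebindable out-parameter is acceptable for the caller) could look like this:
import std.typecons : Rebindable;

class A {}

class B {
    const(A)[] a;

    this() {
        a = [new A(), new A(), new A()];
    }

    // re-binds the caller's handle; no A instance is ever mutated
    void assign_const(ref Rebindable!(const A) value) const {
        value = a[0];
    }
}

void main() {
    Rebindable!(const A) a;
    B b = new B();
    b.assign_const(a);
    assert(a.get is b.a[0]);
}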
I have to copy certain elements from a std::map into a vector.
It should work like in this loop:
typedef int First;
typedef void* Second;

std::map<First, Second> map;
// fill map

std::vector<Second> mVec;
for (std::map<First, Second>::const_iterator it = map.begin(); it != map.end(); ++it) {
    if (it->first % 2 == 0) {
        mVec.push_back(it->second);
    }
}
Since I'd like to avoid writing any functors and use boost::lambda instead, I tried std::copy, but can't get it right.
std::copy (map.begin(), map.end(), std::back_inserter(mVec)
bind(&std::map<int, void*>::value_type::first, _1) % 2 == 0);
I'm new to lambda expressions and can't figure out how to use them correctly.
I didn't get any useful results on Google or StackOverflow either.
This question is similar
What you would need in the STL is a transform_if algorithm. Then you could write:
transform_if(mymap.begin(), mymap.end(),
             back_inserter(myvec),
             bind(&std::map<First, Second>::value_type::second, _1),
             (bind(&std::map<First, Second>::value_type::first, _1) % 2) == 0);
The code for transform_if is taken from an unrelated question; it is:
template<class InputIterator, class OutputIterator, class UnaryFunction, class Predicate>
OutputIterator transform_if(InputIterator first,
                            InputIterator last,
                            OutputIterator result,
                            UnaryFunction f,
                            Predicate pred)
{
    for (; first != last; ++first)
    {
        if (pred(*first))
            *result++ = f(*first);
    }
    return result;
}
I think there is no other way to perform both steps (transform and conditional copy) at once using STL algorithms.
You can use boost range adaptors to achieve that.
using namespace boost::adaptors;

boost::copy(map | filtered([](const pair<First, Second>& p) -> bool { return p.first % 2 == 0; })
                | transformed([](const pair<First, Second>& p) { return p.second; }),
            std::back_inserter(mVec));
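For completeness, if C++20 is available, the same filter-then-transform pipeline can be written with the standard <ranges> library and no Boost at all; a sketch using the types from the question:
#include <map>
#include <ranges>
#include <vector>

int main() {
    typedef int First;
    typedef void* Second;

    std::map<First, Second> map;
    // fill map

    std::vector<Second> mVec;
    auto evens = map
        | std::views::filter([](const auto& p) { return p.first % 2 == 0; })
        | std::views::transform([](const auto& p) { return p.second; });
    for (Second s : evens)
        mVec.push_back(s);
}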