Implement a min priority_queue using vector<pair<char, int>>

I have to implement a minimum priority_queue using vector<char, int>.
I am confused by the following code snippet:
priority_queue(int , vector<char, int> , greater<int> pq;
But this is absolutely wrong.

This should work. Note that vector<char, int> is not a valid container type (vector's second template parameter is an allocator), so what you actually want is a vector of pair<char, int>; the priority_queue's element type must then match the container's value type:
priority_queue<pair<char, int>, vector<pair<char, int>>, greater<pair<char, int>>> PQ;
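For instance, a minimal runnable sketch (pairs compare lexicographically, so the queue orders by the char first, then the int):

#include <iostream>
#include <queue>
#include <utility>
#include <vector>

int main() {
    // min-heap of (char, int) pairs: smallest pair on top
    std::priority_queue<std::pair<char, int>,
                        std::vector<std::pair<char, int>>,
                        std::greater<std::pair<char, int>>> pq;
    pq.push({'b', 2});
    pq.push({'a', 5});
    pq.push({'a', 1});
    std::cout << pq.top().first << ' ' << pq.top().second << '\n'; // prints: a 1
    return 0;
}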


std::unique_ptr and QObject::deleteLater

I would like my std::unique_ptr to call QObject::deleteLater to destruct the object.
I can't figure out how to do it.
Nothing I tried compiles.
E.g.
std::unique_ptr<SomeQObject, decltype(&QObject::deleteLater)> var(
    pointer, &QObject::deleteLater);
Please help...
Addition #1.
OK, I've found that this works:
std::unique_ptr<QObject, decltype(std::mem_fun(&QObject::deleteLater))> var(
    pointer,
    std::mem_fun(&QObject::deleteLater));
Instead of this one:
std::unique_ptr<QObject, decltype(&QObject::deleteLater)> var(
    pointer,
    QObject::deleteLater);
But it's too ugly for me to use it. Is there a good way?
It's very simple and straightforward, by the way:
struct QObjectDeleteLater {
    void operator()(QObject *o) {
        o->deleteLater(); // schedule deletion on the object's event loop
    }
};
template<typename T>
using qobject_delete_later_unique_ptr = std::unique_ptr<T, QObjectDeleteLater>;
Usage:
qobject_delete_later_unique_ptr<QObject> ptr(new QFooBar);
Bonus points if you can come up with a sensible name...
As the documentation says:
Type requirements
- Deleter must be a FunctionObject or lvalue reference to a FunctionObject or lvalue reference to function, callable with an argument of type unique_ptr::pointer
You are 'stuck' with std::bind, std::mem_fun or a lambda here; you can't just use a member function pointer in this context, because it does not satisfy these requirements.
lambda version:
auto deleter = [](QObject *obj) { obj->deleteLater(); };
std::unique_ptr<QObject, decltype(deleter)> x(new QObject(), deleter);
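A side note: since C++20, captureless lambda types are default-constructible, so (assuming a C++20 compiler) you no longer have to pass the lambda object to the constructor:

auto deleter = [](QObject *obj) { obj->deleteLater(); };
// C++20: the closure type is default-constructible, so unique_ptr
// can value-initialize the deleter itself
std::unique_ptr<QObject, decltype(deleter)> x(new QObject());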

UnsafeMutablePointer<Int8> from String in Swift

I'm using the dgeev algorithm from the LAPACK implementation in the Accelerate framework to calculate eigenvectors and eigenvalues of a matrix. Sadly the LAPACK functions are not described in the Apple documentation; it merely includes a link to http://netlib.org/lapack/faq.html.
If you look it up, you will find that the first two arguments of dgeev are characters signifying whether to calculate eigenvectors or not. In Swift, it asks for UnsafeMutablePointer<Int8>. When I simply use "N", I get an error. (The dgeev signature and the error were shown in a screenshot, not reproduced here.)
What should I do to solve this?
The "problem" is that the first two parameters are declared as char *
and not as const char *, even if the strings are not modified by the function:
int dgeev_(char *__jobvl, char *__jobvr, ...);
is mapped to Swift as
func dgeev_(__jobvl: UnsafeMutablePointer<Int8>, __jobvr: UnsafeMutablePointer<Int8>, ...) -> Int32;
A possible workaround is
let result = "N".withCString {
    dgeev_(UnsafeMutablePointer($0), UnsafeMutablePointer($0), &N, ...)
}
Inside the block, $0 is a pointer to a NUL-terminated array of char with the
UTF-8 representation of the string.
Remark: dgeev_() does not modify the strings pointed to by the first two arguments,
so it "should be" declared as
int dgeev_(const char *__jobvl, const char *__jobvr, ...);
which would be mapped to Swift as
func dgeev_(__jobvl: UnsafePointer<Int8>, __jobvr: UnsafePointer<Int8>, ...) -> Int32;
and in that case you could simply call it as
let result = dgeev_("N", "N", &N, ...)
because Swift strings are converted to UnsafePointer<Int8> automatically,
as explained in String value to UnsafePointer<UInt8> function parameter behavior.
It is ugly, but you can use:
let unsafePointerOfN = ("N" as NSString).UTF8String
var unsafeMutablePointerOfN: UnsafeMutablePointer<Int8> = UnsafeMutablePointer(unsafePointerOfN)
and use unsafeMutablePointerOfN as a parameter instead of "N".
With Swift 4.2 and 5 you can use this similar approach:
let str = "string"
let unsafePointer = UnsafeMutablePointer<Int8>(mutating: (str as NSString).utf8String)
and pass unsafePointer as the argument.
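Strictly speaking, both of these hand the callee a pointer into a buffer it does not own and should not write to. A more defensive pattern (a sketch; the helper name withMutableCString is my own) is to give the function its own mutable, NUL-terminated copy of the string and free it afterwards:

import Foundation

// Copies the string's UTF-8 bytes into a fresh heap buffer (strdup),
// hands the mutable pointer to `body`, then frees the copy.
func withMutableCString<R>(_ s: String, _ body: (UnsafeMutablePointer<Int8>) -> R) -> R {
    let copy = strdup(s)!
    defer { free(copy) }
    return body(copy)
}

A call would then look like this (arguments after the first two elided, as above):

let result = withMutableCString("N") { jobvl in
    withMutableCString("N") { jobvr in
        dgeev_(jobvl, jobvr, &N, ...)
    }
}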

What is this NSErrorPointer type?

In Swift the ampersand sign & is for inout parameters, like:
var value = 3
func inoutfunc(inout onlyPara: Int) {
    onlyPara++
}
inoutfunc(&value) // value is now 4
That doesn't look like onlyPara is a pointer; maybe it is and gets dereferenced immediately when used inside the function.
1. Is onlyPara a pointer?
2. When I don't need an IntPointer type, why do the framework methods use an NSErrorPointer type? Is it because they can't change the methods due to existing Objective-C code?
3. But why does Swift then convert &error to NSErrorPointer; is that autoboxed?
var errorPtr: NSErrorPointer = &error
4. And when I have an NSErrorPointer, how do I dereference it?
var error: NSError = *errorPtr // won't work
Maybe someone can enlighten me. Using only Swift is easy; I think these questions sit in the gap between Swift and Objective-C, where & also acts as the address-of operator.
Solution to 4. I found out how to dereference it:
var error: NSError = errorPtr.memory!
I suggest you read the Pointers section of the Using Swift with Cocoa and Objective-C guide: https://developer.apple.com/library/prerelease/ios/documentation/Swift/Conceptual/BuildingCocoaApps/InteractingWithCAPIs.html#//apple_ref/doc/uid/TP40014216-CH8-XID_16
There is a table at the bottom of the Pointers section, which explains how class pointers bridge to Swift pointer types. Based on that, the NSError pointer should be AutoreleasingUnsafePointer<NSError>. Searching through the headers for NSErrorPointer yields this:
typealias NSErrorPointer = AutoreleasingUnsafePointer<NSError?>
Why the extra ? after NSError? I guess it's because NSError can also be nil.
Hope it helps!
Swift 3
In Swift 3, memory was renamed to pointee. If you are handed an NSErrorPointer (for example, as a parameter of a function you implement), you dereference it with pointee:
if let error = errorPtr?.pointee { ... }
On the calling side, declare an optional NSError and pass its address; the &error expression bridges to NSErrorPointer:
var error: NSError?
callSomeFunctionThatReceivesParamByReference(.., error: &error)
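For completeness, here is a small self-contained sketch of the callee side (the function name and error domain are made up for illustration):

import Foundation

// Reports failure Objective-C style: write through the pointer, return false.
func doWork(error errorPtr: NSErrorPointer) -> Bool {
    errorPtr?.pointee = NSError(domain: "ExampleDomain", code: 1, userInfo: nil)
    return false
}

var error: NSError?
if !doWork(error: &error) {
    print(error?.localizedDescription ?? "unknown error")
}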

How to initialize a bit vector in VHDL

I want to have a bit vector with the value 2. I tried many things but I always get an error. Now I have this:
variable var : bit_vector := B"00000000000000000000000000000100";
I get these errors :
can't match integer literal with type array type "bit_vector"
and
declaration of variable "var" with unconstrained array type
"bit_vector" is not allowed
How can I fix this? Thanks.
You must give var a range (constrain it), like:
variable var : bit_vector(31 downto 0) ...
Then you can assign a constant to it for example with:
library ieee;
use ieee.numeric_bit_unsigned.all;
...
variable var : bit_vector(31 downto 0) := to_bit_vector(2, 32);
or with initial value given as bit string like:
variable var : bit_vector(31 downto 0) := "00000000000000000000000000000010";
The use of to_bit_vector is less error prone, as you can see, since the constant in your example code is not 2 but actually 4 ;-)
A more generic assignment, independent of the vector's width, is the following:
var := (1 => '1', others => '0');  -- sets only bit 1, i.e. the value 2, whatever the width
Adding to @MortenZilmer's answer, please note that:
The compiler doesn't have a problem with your original code if you declare a constant instead of a variable (though the literal below is still the value 4, as pointed out above):
constant const: bit_vector := b"00000000000000000000000000000100"; -- works OK
You can make the bitstring literal more readable if you write it in base 10. You can also specify the number of bits of the bitstring literal (this sized decimal form is VHDL-2008 syntax), as per the example below:
variable var: bit_vector(31 downto 0) := 32d"2";
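Putting the alternatives side by side, a minimal self-contained sketch (entity and names are placeholders; assumes a VHDL-2008 compiler for the 32d"2" literal):

library ieee;
use ieee.numeric_bit_unsigned.all;

entity init_demo is
end entity;

architecture sim of init_demo is
begin
    process
        -- three ways to initialize a 32-bit vector to the value 2
        variable a : bit_vector(31 downto 0) := to_bit_vector(2, 32);
        variable b : bit_vector(31 downto 0) := (1 => '1', others => '0');
        variable c : bit_vector(31 downto 0) := 32d"2";
    begin
        assert a = b and b = c report "mismatch" severity failure;
        wait;
    end process;
end architecture;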

Xtext typesystem multidimensional arrays support

I am currently writing a parser for a Domain Specific Language using Xtext. In order to check the validity of data type usage in expressions, I also use the Xtext Typesystem framework.
I want to include multidimensional arrays in my grammar, and I have done it as far as Xtext is concerned, but I have problems with the type system.
My grammar about arrays in Xtext is the following:
ArrayType:
    {ArrayType} (basetype=BaseType (dim+=Dimensions)+)
;
BaseType:
    PrimaryType | StructuredType
;
Dimensions:
    {Dimensions} '[' size=Expr ']'
;
An example of the above grammar is int[5] name; (well, name itself isn't actually part of this fragment of the grammar).
Now, let's go to what I have done in the type system in order to declare ArrayTypes, according to this tutorial (type recursion features section).
public EObject type(ArrayType a_t, TypeCalculationTrace trace)
{
    ArrayType arraytype = (ArrayType) Utils.create(lang.getArrayType());
    EObject basetype = typeof(a_t.getBasetype(), trace);
    arraytype.setBasetype((BaseType) basetype);
    return arraytype;
}
declareTypeRecursionFeature(lang.getArrayType(), lang.getArrayType_Basetype());
So if I declare a variable int[5] k; the type returned is ArrayType(int).
What I want to do, and can't, is to include the number of the array's dimensions in the type. For example:
int[3][2] k; // this should be of type ArrayType(int)[][]
int[5] g;    // this should be of type ArrayType(int)[]
k[3][1] = 3; // must be right
k[3] = g;    // must be right
k = g;       // must be wrong
I am really sorry for the long message, but I don't know how else I could explain it better. I will appreciate any ideas!
Thank you in advance!
Kate
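One possible direction (an untested sketch, not a verified answer: it assumes the EMF list accessor getDim() generated from dim+=Dimensions, and that the typesystem compares computed ArrayType instances structurally, including the size of that list) is to copy one Dimensions entry per declared dimension into the computed type:

public EObject type(ArrayType a_t, TypeCalculationTrace trace)
{
    ArrayType arraytype = (ArrayType) Utils.create(lang.getArrayType());
    EObject basetype = typeof(a_t.getBasetype(), trace);
    arraytype.setBasetype((BaseType) basetype);
    // record one Dimensions element per declared dimension, so that
    // int[3][2] (two dimensions) and int[5] (one dimension) yield
    // different computed types
    for (int i = 0; i < a_t.getDim().size(); i++) {
        arraytype.getDim().add((Dimensions) Utils.create(lang.getDimensions()));
    }
    return arraytype;
}

An indexing expression k[i] would then have to return a copy of the array's type with one dimension removed, so that the type of k[3] compares equal to the type of g.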
