Making a reference-counted object in D using RefCounted!(T)

How do you use std.typecons.RefCounted!(T) to make a reference-counted object in D?
I've tried to figure out what std.container.Array does internally by looking at the source, but while I can read the code, I just can't figure out what a "payload" is, or how it all works when there are things like bitwise struct copying involved, or why some things are duplicated in the internal and the external structure.
Could anyone provide an example or a link on how to use it to, say, wrap a simple Win32 HANDLE?
Thanks!

Disclaimer: I haven't tested my claims, just read the documentation.
"Payload" refers to what is being stored; in your case the payload is the Win32 HANDLE. Since a HANDLE is just an integer, you wouldn't want to do:
auto refHandle = RefCounted!HANDLE(WhatGetsMeAHandle());
because a Windows function will still need to be called to release the handle when it goes out of scope, and RefCounted!HANDLE alone won't do that.
In std.container.Array what you saw is a struct called Payload, which has a field called _payload. That structure is the actual storage for the data, accessed through _payload; it provides a level of indirection that gets used later.
You will notice that RefCounted is actually applied to that Payload struct inside Array. This means the destructor for Payload is only called when the reference count reaches 0, so the ~this() inside Payload is where you would want to clean up your HANDLE.
What is happening is this: since a struct is a value type, its destructor runs every time an instance goes out of scope. Array itself doesn't define one, but because Payload is wrapped in a RefCounted, the destructor for RefCounted!Payload runs instead, and only when the reference count reaches zero is the destructor for Payload itself called.
Now, RefCounted itself has reference semantics. This means that given an Array a, you can write auto b = a; and the struct's fields are copied over, but because RefCounted defines a postblit, the payload is not duplicated; the reference count is simply incremented.
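To make the copy semantics concrete, here is a small self-contained sketch of my own (not taken from Array's source); it shows that copying a RefCounted value only bumps the count, and the payload's destructor runs once, when the last copy goes away:
import std.stdio;
import std.typecons : RefCounted, RefCountedAutoInitialize;

struct Payload
{
    int value;
    ~this() { writeln("Payload destroyed"); }
}

void main()
{
    // Construct the payload in place with value = 42.
    auto a = RefCounted!(Payload, RefCountedAutoInitialize.no)(42);
    {
        auto b = a;        // postblit: count 1 -> 2, the Payload is not copied
        writeln(b.value);  // access is forwarded to the payload via alias this
    }                      // b leaves scope: count 2 -> 1, nothing destroyed yet
    writeln("payload still alive here");
}                          // a leaves scope: count 1 -> 0, "Payload destroyed" prints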
I will now try and provide you with a wrapper outline for what you want. It will probably help you visualize the information above, but it may not be entirely correct. Let me know if something needs fixing.
struct MyWinWrapper {
    struct Payload {
        HANDLE _payload;

        this(HANDLE h) { _payload = h; }
        ~this() { freeHandleHere(_payload); }

        // Should never perform these operations
        this(this) { assert(false); }
        void opAssign(MyWinWrapper.Payload rhs) { assert(false); }
    }

    private alias RefCounted!(Payload, RefCountedAutoInitialize.no) Data;
    private Data _data;

    this(HANDLE h) { _data = Data(h); }
}
Since there is no default constructor for a struct you will probably want to provide a free function that returns this structure.
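For example, something like this (my sketch; WhatGetsMeAHandle() is the placeholder name from your question):
// Hypothetical factory function; WhatGetsMeAHandle() stands in for
// whichever Win32 call actually produces the HANDLE.
MyWinWrapper openWinHandle()
{
    return MyWinWrapper(WhatGetsMeAHandle());
}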

Related

How can I retrieve struct from nested array in solidity?

I want to grab my User out of the nested gameBoard array so that I can move it to a new index set of x and y. Remix IDE throws this error: TypeError: type struct Game.User storage ref[] storage ref is not implicitly convertible to expected type struct Game.User memory. I originally tried this without the memory, but not only does that go against the goal of not storing it permanently (if I understand it correctly), but also it threw less useful errors. Please help!
pragma solidity ^0.4.0;

contract Game {
    struct User {
        address owner;
        uint currency;
        uint left;
        uint right;
        uint top;
        uint bottom;
    }

    User[][10][10] public gameBoard;

    function addUser (uint _x, uint _y) public {
        gameBoard[_x][_y].push(User(msg.sender, 10, 5, 5, 5, 5));
    }

    function moveUser (uint _fromX, uint _fromY, uint _toX, uint _toY) public {
        User memory mover = gameBoard[_fromX][_fromY];
        if (mover.owner != msg.sender) return;
        // once I have 'mover', I will check whether
        // it's the msg.sender's and then place
        // it where I want it to go
    }
}
Short Answer: You are indexing your array wrong, and need another set of [brackets].
So you created a 3-dimensional array of users called gameBoard. When adding a user, you correctly push a struct into the dynamic third dimension of the array. However, when you access the structs you only give two dimensions, so Solidity returns the dynamic User array. Since you are trying to store that into a single struct rather than an array of structs, the error is thrown. The easiest way to fix it is to use:
User memory mover = gameBoard[_fromX][_fromY][0];
However, this only returns the first user at that position on the game board, so you'll probably need to do some sort of looping (which isn't ideal in contracts). Personally I prefer to stay away from multidimensional arrays, and honestly arrays in general (although they have their uses), when working with Solidity. Mappings are usually a lot easier to work with, especially when keyed by addresses. Could you elaborate on what you are attempting to do, in case there is a better way of achieving it?
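For illustration only, here is a rough sketch of what a mapping-based layout could look like. This is my example, not a drop-in replacement: it assumes one User per address and stores the coordinates on the user, so it changes your data model.
pragma solidity ^0.4.0;

contract MappedGame {
    struct User {
        address owner;
        uint currency;
        uint x;
        uint y;
    }

    // One User per address instead of a nested array.
    mapping(address => User) users;

    function addUser(uint _x, uint _y) public {
        users[msg.sender] = User(msg.sender, 10, _x, _y);
    }

    function moveUser(uint _toX, uint _toY) public {
        User storage mover = users[msg.sender];
        if (mover.owner != msg.sender) return;
        mover.x = _toX;
        mover.y = _toY;
    }
}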

Integrating smart pointers with legacy code raw pointers

I have a situation, where I have existing code that works with raw pointers, and I'm not permitted to smart-pointer-ify it. However, I am permitted to use smart pointers in any new code I develop.
For example.
I have an existing function like:
void processContent()
{
    ContentObject * myContent = new ContentObject();
    newFunction(myContent);
}

void newFunction(ContentObject * content)
{
    // myVector is just a std::vector<ContentObject*>, defined elsewhere
    myVector.push_back(content);
}

void doSomethingWithContent()
{
    // There is some logic here, but ultimately based on this logic I want to
    // remove entries, and free the memory they point to.
    myVector.pop_back();
}
I have control over the contents of "newFunction" and "doSomethingWithContent", but the argument passed into newFunction is fixed. Obviously I could manually delete the pointer in myVector before popping it, but I wondered if I can use smart pointers here so that it happens "automatically" for me?
Can I take a raw pointer passed into a function, and turn it into a unique_ptr, then add this to a container, and have it delete the memory when it's popped from the container?
Thanks
Joey
Assuming you can define your myVector as follows:
std::vector<std::shared_ptr<ContentObject>> myVector;
In that case you can switch to smart pointers in your code, and myVector will keep all your objects alive as you expect:
void newFunction(ContentObject * content)
{
    myVector.push_back(std::shared_ptr<ContentObject>(content));
}

void doSomethingWithContent()
{
    // There is some logic here, but ultimately based on this logic I want to
    // remove entries, and free the memory they point to.
    myVector.pop_back();
}

When and how to use a vector of references

This code correctly compiles. It has a few unused code warnings, but that's okay for now.
use std::collections::BTreeMap;

enum Object<'a> {
    Str(String),
    Int(i32),
    Float(f32),
    Vector(Vec<&'a Object<'a>>),
    Prim(fn(State) -> State)
}

struct State<'a> {
    named: BTreeMap<String, &'a Object<'a>>,
    stack: Vec<Object<'a>>
}

impl<'a> State<'a> {
    fn push_int(&mut self, x: i32) {
        self.stack.push(Object::Int(x));
    }
}

fn main() {
    println!("Hello, world!");
    let obj = Object::Str("this is a test".to_string());
}
The important part of this code is push_int and stack: Vec<Object<'a>>.
I'm sort of trying to make a stack-based VM.
I want to pass the state to functions, which can take stuff off the stack, manipulate the stuff, and then put some stuff back on the stack; the named field is going to hold named objects.
I have a hunch that it would be better to have the stack represented as a Vec<&'a Object<'a>> instead.
The way I have it now, I fear I'm committing some inefficiency error. Is my hunch correct?
The second part of the problem is that I don't know how to get the vector-of-references version to work. Creating new values with the right lifetimes to push onto the stack is not working for me.
I'm a bit vague about this issue, so if I've been unclear, ask me questions to clear stuff up.
The reason you could not get it to work is that structs cannot have fields that refer to other fields. (See supporting links at the bottom.)
What you can do is put all the Objects into your Vec and have the BTreeMap contain the indices of the named elements it references:
struct State {
    named: BTreeMap<String, usize>,
    stack: Vec<Object>
}
I'd also remove all lifetimes from your example, as this can be done completely with owned objects.
enum Object {
    Str(String),
    Int(i32),
    Float(f32),
    Vector(Vec<Object>),
    Prim(fn(State) -> State)
}
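To sketch how the index-based version might be used (my illustration, not the Playground code; helpers such as name_top and get_named are made-up names):
use std::collections::BTreeMap;

enum Object {
    Str(String),
    Int(i32),
    Float(f32),
    Vector(Vec<Object>),
    Prim(fn(State) -> State),
}

struct State {
    named: BTreeMap<String, usize>,
    stack: Vec<Object>,
}

impl State {
    fn push_int(&mut self, x: i32) {
        self.stack.push(Object::Int(x));
    }

    // Give a name to whatever is currently on top of the stack.
    fn name_top(&mut self, name: &str) {
        if !self.stack.is_empty() {
            self.named.insert(name.to_string(), self.stack.len() - 1);
        }
    }

    // Look a named object up again through its stored index.
    fn get_named(&self, name: &str) -> Option<&Object> {
        self.named.get(name).and_then(|&i| self.stack.get(i))
    }
}

fn main() {
    let mut state = State { named: BTreeMap::new(), stack: Vec::new() };
    state.push_int(42);
    state.name_top("answer");
    if let Some(Object::Int(n)) = state.get_named("answer") {
        println!("named object holds {}", n);
    }
}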
You can try out a working implementation in the Playground
Supporting links:
How to initialize struct fields which reference each other
using self in new constructor
How to design a struct when I need to reference to itself
How to store HashMap and its Values iterator in the same struct?
What lifetimes do I use to create Rust structs that reference each other cyclically?
How to store a SqliteConnection and SqliteStatement objects in the same struct in Rust?
Why can't I store a value and a reference to that value in the same struct?
https://stackoverflow.com/questions/33123634/reference-inside-struct-to-object-it-owns

Go reflection with interface embedded in struct - how to detect "real" functions?

The situation I have now is the same as was asked about in this thread: Meaning of a struct with embedded anonymous interface?
type A interface {
    Foo() string
}

type B struct {
    A
    bar string
}
Coming from a background in OOP languages, what this pattern looks like it is "trying to say" to me is that B must implement interface A. But I get by now that "Go is different". So, rather than the compile-time check I expected at first, this is happy to compile with or without a
func (B) Foo() string { .... }
present. As the above question points out (paraphrased): "using embedded interfaces in structs is great for when you only want to implement /part/ of an interface".
Presumably, this is because what is happening with this embed is just like in every other case - a value of type B would have an anonymous interface value of type A, as a field. Personally, while I find that orthogonality comforting, I also find it confusing that the reflection package then lets me get methods of A directly from B's type this way, rather than returning an error or nil when no method with receiver B is present. But this question isn't about the thinking behind that; it is about how that interface value is initialized after b := B{}:
func main() {
    bType := reflect.TypeOf(B{})
    bMeth, has := bType.MethodByName("Foo")
    if has {
        fmt.Printf("HAS IT: %s\n", bMeth.Type.Kind())
        res := bMeth.Func.Call([]reflect.Value{reflect.ValueOf(B{})})
        val := res[0].Interface()
        fmt.Println(val)
    } else {
        fmt.Println("DOESNT HAS IT")
    }
}
When this is run, it causes a horrible panic
HAS IT: func
panic: runtime error: invalid memory address or nil pointer dereference
... or doesn't, depending on whether the compiler/runtime was able to find the above method. So: how can I detect that situation before I trigger it?
That is: is there something about the bMeth value I can use to see that there is no "real" implementation behind the reflection-returned Method and func values? Is it, more precisely, something like "is the pointer to the function in the function table of the anonymous interface value nil", or what exactly is going on with methods you pull from an interface with reflection when there is no implementation?
Wrapping the whole thing in a goroutine and attempting to run the function under defer/recover isn't the answer, not only because of the expense of the panic/recover but because the function, if it does exist, might have side effects I don't want right now...
Do I want something like a run-time implementation that mirrors the compiler's type check? Or is there an easier way? Am I thinking about this incorrectly?
Above example in a Go playground
To my mind, you don't need reflection for this:
method_in_table := B.Foo
fmt.Printf("%T \n", method_in_table)
will output
func(main.B) string
An interface of type A is initialized to the predeclared nil, which has no dynamic type:
var a A
if a == nil {
    fmt.Printf("It's nil")
}
a.Foo()
will give you the same error. So a practical check can be just
if b.A != nil { b.Foo() }
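Here is a small runnable version of that guard (my sketch; the realA type is made up just to show the non-nil case):
package main

import "fmt"

type A interface {
    Foo() string
}

type B struct {
    A
    bar string
}

// realA is a hypothetical concrete implementation of A.
type realA struct{}

func (realA) Foo() string { return "real Foo" }

func main() {
    empty := B{}            // embedded A is nil
    filled := B{A: realA{}} // embedded A carries a dynamic type

    for _, b := range []B{empty, filled} {
        if b.A != nil {
            fmt.Println(b.Foo()) // the promoted call resolves to b.A.Foo()
        } else {
            fmt.Println("embedded A is nil; calling Foo would panic")
        }
    }
}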
This question is old with some good answers, but none presents the possibility that this can be done.
Before presenting the solution: I think it's not your job to make sure an implementation does not panic because it fails to set an embedded interface field. Someone could pass an implementation which defines the methods explicitly but calls panic() inside them. You could not detect that case, and yet that implementation wouldn't be any better than a nil embedded interface field.
OK, so how to tell if a method cannot be called because it would panic due to the implementation not being available because the embedded interface field is nil?
You said you can't / don't want to call the method and recover from a panic because if the method is available, this would call it and have its side effect.
The fact is that we don't have to call it. We can just refer to the method via an instance (not the type), and then the actual receiver has to be resolved. If the receiver is the dynamic value of an embedded interface and that interface is nil, the resolution will cause a runtime panic; and even if the embedded interface is not nil, the method will not be called. Note that this is in fact a method value, and obtaining a method value evaluates and saves the receiver. That receiver evaluation is what will fail.
Let's see an example:
type A interface {
    Foo() string
}

type B struct {
    A
}

func (b B) Int() int {
    fmt.Println("B.Int() called")
    return 0
}

func main() {
    b := B{}
    _ = b.Int
    fmt.Println("We got this far, b.Int is realized")
}
What will this program output? Only "We got this far, b.Int is realized". Because the Int() method is explicitly defined for the B type, and so b.Int can be resolved. And since it's not called, "B.Int() called" will not be printed.
What if we do this:
_ = b.Foo
Since Foo is a method promoted from the embedded interface B.A, and b.A is nil, resolving b.Foo will fail at runtime and produce a runtime error, something like this:
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x47d382]
goroutine 1 [running]:
main.main()
/tmp/sandbox877757882/prog.go:24 +0x2
But we can recover from this:
defer func() {
    if r := recover(); r != nil {
        fmt.Println("Recovered:", r)
        fmt.Println("This means b.Foo is not realized!")
    }
}()
_ = b.Foo
This will output:
Recovered: runtime error: invalid memory address or nil pointer dereference
This means b.Foo is not realized!
Try the examples on the Go Playground.
Let me put my two cents in, after you've already received good answers for your question.
Presumably, this is because what is happening with this embed is just like in every other case - a value of type B would have an anonymous interface value of type A, as a field.
You've basically solved the problem here. This is just a field, but because it's anonymous all its methods are being promoted and you can use them directly on the struct. This is not only related to interfaces, but the problem you've pointed to exists within ordinary structures as well:
package main

type A struct {
}

func (a A) Foo() {
}

type B struct {
    *A
}

func main() {
    B{}.Foo()
}
This will panic. I believe this is expected: we're saying B embeds *A but then leave it uninitialised, so what do we expect? We could look for an analogy in, say, C++, and note that it is similar to a null pointer there: how do we deal with it? We either expect it to be non-null (by contract) or check before using it. The latter is what Uvelichitel suggested in the accepted answer, and it is correct; I don't think there is a better solution, although it isn't very pleasant. We expect the caller to know that the method they're calling is a promoted method of an anonymous field of pointer (or interface) type, and as such can be nil. As the author of such code, I would either need to make sure the field is never nil (a contract) or state clearly in the documentation that the caller needs to check it (though then I'm not sure why I would embed the type rather than use a normal field).
It bothers me more with interfaces, though, because going back to your example and making A an interface, we have the following problem:
package main

import "fmt"

type A interface {
    Foo()
}

type B struct {
    A
}

func main() {
    var b interface{}
    b = &B{}
    // Nicely check whether interface is implemented
    if a, ok := b.(A); ok {
        a.Foo()
    }
}
Whoops, panic. I deliberately don't use the reflect package here, to show that your problem exists within "normal" language usage too. I have an interface value b and want to check whether it implements interface A. The answer is yes, but I still get a panic. Who is to blame? I would feel much more comfortable blaming the creator of the object behind the interface b, who advertises some functionality but doesn't care to provide the implementation. As such I would call it bad practice, or at least expect it to be clearly stated in the documentation, rather than assuming that ok in the above type assertion actually means ok.
This is getting too long and off topic, I think. My answer to your question is then a mixture of the answers already given: directly check that A is not nil, and if that's not possible (because you don't know the exact field promoting the method), hope for the best and blame someone else.
I don't think this is possible. From what I can see in reflect's documentation and code, there is no way to know whether a method is defined on the type itself or promoted. It seems like panic/recover is the best you can do here.
There are 3 questions here.
An embedded interface does not mean "implements A". It's exactly the same as embedding any other type of object. If you want to implement A, just make a method: func (b B) Foo() string.
When you say:
using embedded interfaces in structs is great for when you only want to
implement /part/ of an interface
That does work, but you have to make sure to create the object properly. Think of it like wrapping an existing object:
type MyReadCloser struct {
    io.ReadCloser
}

func (mrc *MyReadCloser) Read(p []byte) (int, error) {
    // do your custom read logic here, then (for example) delegate
    // to the wrapped reader
    return mrc.ReadCloser.Read(p)
}

// you get `Close` for free

func main() {
    // assuming we have some reader
    var rc io.ReadCloser
    // you have to build the object like this:
    myReader := MyReadCloser{rc}
}
I'm not sure how Go does it internally, but conceptually it's as if it creates a Close method for you:
func (mrc *MyReadCloser) Close() error {
    return mrc.ReadCloser.Close()
}
The panic is because A is nil. If you had:
type concrete string

func (c concrete) Foo() string {
    return string(c)
}

func main() {
    b := B{A: concrete("test")}
    // etc...
}
It would work. In other words when you call:
bMeth.Func.Call([]reflect.Value{reflect.ValueOf(B{})})
That's:
B{}.Foo()
Which is:
B{}.A.Foo()
And A is nil so you get a panic.
As to the question about how to get only the methods directly implemented by an object (not methods implemented by an embedded field), I wasn't able to see a way using the reflect library. MethodByName gives no indication:
<func(main.B) string Value>
Internally that's basically a function like this:
func(b B) string {
    return b.A.Foo()
}
And I don't think there's anything in reflect that allows you to peer into the internals of a function. I tried looping over the fields, grabbing their methods and comparing the two, but that doesn't work either.

Using Recursive References in Go

I want to contain all my commands in a map and map from the command to a function doing the job (just a standard dispatch table). I started with the following code:
package main

import "fmt"

func hello() {
    fmt.Print("Hello World!")
}

func list() {
    for key, _ := range whatever {
        fmt.Print(key)
    }
}

var whatever = map[string](func()) {
    "hello": hello,
    "list": list,
}
However, it fails to compile because there is a recursive reference between the functions and the map. Trying to forward-declare the functions fails with a re-definition error when they are later defined, and the map is at top level. How do you define structures like this and initialize them at top level without having to use an init() function?
I see no good explanation in the language definition.
The forward-reference that exists is for "external" functions and it does not compile when I try to forward-declare the function.
I find no way to forward-declare the variable either.
Update: I'm looking for a solution that does not require you to populate the variable explicitly when you start the program or in an init() function. I'm not sure that is possible at all, but it works in all comparable languages I know of.
Update 2: FigmentEngine suggested an approach that I give as an answer below. It can handle recursive types and also allows static initialization of the map of all commands.
As you might already have found, the Go specification states (my emphasis):
if the initializer of A depends on B, A will be set after B. Dependency analysis does not depend on the actual values of the items being initialized, only on their appearance in the source. A depends on B if the value of A contains a mention of B, contains a value whose initializer mentions B, or mentions a function that mentions B, recursively. It is an error if such dependencies form a cycle.
So, no, it is not possible to do what you are trying to do. Issue 1817 mentions this problem, and Russ Cox does say that the approach in Go might occasionally be over-restrictive. But it is clear and well defined, and workarounds are available.
So, the way to work around it is still by using init(). Sorry.
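For completeness, the init() workaround applied to the code in the question looks like this:
package main

import "fmt"

var whatever map[string]func()

func init() {
    // Populating the map here breaks the cycle: the initialization
    // dependency analysis only applies to package-level initializers.
    whatever = map[string]func(){
        "hello": hello,
        "list":  list,
    }
}

func hello() {
    fmt.Print("Hello World!")
}

func list() {
    for key := range whatever {
        fmt.Print(key)
    }
}

func main() {
    whatever["hello"]()
}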
Based on the suggestion by FigmentEngine above, it is actually possible to create a statically initialized map of commands. You do, however, have to pre-declare a type that you pass to the functions. I give the rewritten example below, since it is likely to be useful to others.
Let's call the new type Context. It can contain a circular reference as below.
type Context struct {
    commands map[string]func(Context)
}
Once that is done, it is possible to declare the map at top level like this:
var context = Context {
    commands: map[string]func(Context) {
        "hello": hello,
        "list": list,
    },
}
Note that it is perfectly OK to refer to functions defined later in the file, so we can now introduce the functions:
func hello(ctx Context) {
    fmt.Print("Hello World!")
}

func list(ctx Context) {
    for key, _ := range ctx.commands {
        fmt.Print(key)
    }
}
With that done, we can create a main function that will call each of the functions in the declared context:
func main() {
    for key, fn := range context.commands {
        fmt.Printf("Calling %q\n", key)
        fn(context)
    }
}
Just populate the map inside a function before using list().
Sorry, I did not see that you wrote "without init()": that is not possible.
