I'm working on a toy JavaScript-like programming language written in Rust, and this language has a very basic mark-and-sweep garbage collector. I have a working prototype. The way I implemented it is to wrap Rust objects that will be allocated on the GC heap inside a Box. I can then take a pointer to the GC'd object and wrap that pointer in a dynamically typed Value type, i.e.:
#[derive(Debug, Copy, Clone, PartialEq)]
pub enum Value {
    Int64(i64),
    UInt64(u64),
    HostFn(HostFn),
    Fun(*mut Function),
    Str(*mut String),
    Nil,
}
These are the values which my bytecode interpreter works with. The full source code is here if anyone wants to see more about how the GC works and provide feedback. I'm trying to keep the implementation as simple as possible so that this will be beginner-friendly.
What I'd like advice on is ergonomics: working with mutable pointers like this in Rust is very unergonomic. Every time I want to dereference one, I have to wrap the access in unsafe. This means that if I want to implement string concatenation or equality, I have to wrap the whole thing in an unsafe block where I dereference and borrow both strings, and I have to do the same for function calls. That's going to result in unsafe blocks everywhere in the implementation of my interpreter.
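Here is roughly what string equality looks like with the current design (just a sketch; the helper name is made up):

// Equality between two GC'd strings; everything lands in one unsafe block.
fn str_eq(a: Value, b: Value) -> bool {
    match (a, b) {
        (Value::Str(p), Value::Str(q)) => unsafe {
            // SAFETY: both pointers must point to live strings on the GC heap.
            *p == *q
        },
        _ => false,
    }
}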
So my question is: can we make this more ergonomic somehow? One potential idea would be to have a helper method on Value that returns, for example, a &'static mut String. This is essentially lying to the Rust compiler about lifetimes for the sake of convenience, because the programmer still needs to be careful that GC'd values are rooted on the VM heap, otherwise they will be collected. Would this be safe, though? The real danger would come if the Rust compiler were to reorder operations. Advice welcome.
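For concreteness, the helper I have in mind would look something like this (again just a sketch; the method name is made up):

impl Value {
    // The 'static lifetime here is a lie: the caller must keep the
    // string rooted on the VM heap, and must not hold two overlapping
    // &mut references to the same string.
    pub fn unwrap_str(self) -> &'static mut String {
        match self {
            Value::Str(p) => unsafe { &mut *p },
            _ => panic!("expected a string value"),
        }
    }
}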
In Go I seem to have two options for constructing a value and calling a method on it:

foo := Thing{}
foo.bar()

foo := &Thing{}
foo.bar()

and two options for defining the method:

func (self Thing) bar() {
}

func (self *Thing) bar() {
}
What's the better way to define my funcs: with self Thing or with self *Thing?
Edit: this is not a duplicate of the question about methods and functions. This question is about Thing versus &Thing, and I think it's different enough to warrant its own URL.
Take a look at this item from the official FAQ:
For programmers unaccustomed to pointers, the distinction between these two examples can be confusing, but the situation is actually very simple. When defining a method on a type, the receiver (s in the above examples) behaves exactly as if it were an argument to the method. Whether to define the receiver as a value or as a pointer is the same question, then, as whether a function argument should be a value or a pointer. There are several considerations.

First, and most important, does the method need to modify the receiver? If it does, the receiver must be a pointer. (Slices and maps act as references, so their story is a little more subtle, but for instance to change the length of a slice in a method the receiver must still be a pointer.) In the examples above, if pointerMethod modifies the fields of s, the caller will see those changes, but valueMethod is called with a copy of the caller's argument (that's the definition of passing a value), so changes it makes will be invisible to the caller.

By the way, pointer receivers are identical to the situation in Java, although in Java the pointers are hidden under the covers; it's Go's value receivers that are unusual.

Second is the consideration of efficiency. If the receiver is large, a big struct for instance, it will be much cheaper to use a pointer receiver.

Next is consistency. If some of the methods of the type must have pointer receivers, the rest should too, so the method set is consistent regardless of how the type is used. See the section on method sets for details.

For types such as basic types, slices, and small structs, a value receiver is very cheap so unless the semantics of the method requires a pointer, a value receiver is efficient and clear.
There isn't a clear answer, but they're completely different. When you don't use a pointer you 'pass by value', meaning the method receives a copy of the object you called it on, so any modifications it makes are invisible to the caller. When you use a pointer you effectively 'pass by reference' (strictly speaking, Go copies the pointer, but the method can modify the original through it). I would say you use the pointer variety more often, but it is completely situational; there is no 'better way'.
If you look at various programming frameworks/class libraries you will see many examples where the authors have deliberately chosen to do things by value or by reference. For example, in C# .NET this is the fundamental difference between a struct and a class, and types like Guid and DateTime were deliberately implemented as structs (value types). Again, I think the pointer is more often the better choice (if you look through .NET, almost everything is a class, the reference type), but it definitely depends on what you wish to achieve with the type and/or how you want consumers/other developers to interact with it. You may also need to consider performance and concurrency (maybe you want everything to be by value so you don't have to worry about concurrent operations on a type; maybe you need a pointer because the object's memory footprint is large and copying it would make your program too slow or too memory-hungry).
I have read somewhere that in a language that features pointers, it is not possible for the compiler to decide fully at compile time whether all pointers are used correctly and/or are valid (refer to a live object), for various reasons, since that would essentially constitute solving the halting problem. That is not surprising, intuitively, because in this case we would be able to infer the runtime behavior of a program at compile time, similarly to what's stated in this related question.
However, from what I can tell, the Rust language requires that pointer checking be done entirely at compile time (there's no undefined behavior related to pointers, "safe" pointers at least, and there's no "invalid pointer" or "null pointer" runtime exception either).
Assuming that the Rust compiler doesn't solve the halting problem, where does the fallacy lie?
Is it the case that pointer checking isn't done entirely at compile-time, and Rust's smart pointers still introduce some runtime overhead compared to, say, raw pointers in C?
Or is it possible that the Rust compiler can't make fully correct decisions, and it sometimes needs to Just Trust The Programmer™, probably using one of the lifetime annotations (the ones with the <'lifetime_ident> syntax)? In this case, does this mean that the pointer/memory safety guarantee is not 100%, and still relies on the programmer writing correct code?
Another possibility is that Rust pointers are non-"universal" or restricted in some sense, so that the compiler can infer their properties entirely at compile time, but they are not as useful as e.g. raw pointers in C or smart pointers in C++.
Or maybe it is something completely different and I'm misinterpreting one or more of { "pointer", "safety", "guaranteed", "compile-time" }.
Disclaimer: I'm in a bit of a hurry, so this is a bit meandering. Feel free to clean it up.
The One Sneaky Trick That Language Designers Hate™ is basically this: Rust can only reason about the 'static lifetime (used for global variables and other whole-program lifetime things) and the lifetime of stack (i.e. local) variables: it cannot express or reason about the lifetime of heap allocations.
This means a few things. First of all, the library types that deal with heap allocations (i.e. Box<T>, Rc<T>, and Arc<T>) all own the thing they point to. As a result, they don't actually need lifetimes in order to exist.
Where you do need lifetimes is when you're accessing the contents of a smart pointer. For example:
let mut x: Box<i32> = Box::new(0);
*x = 42;
What is happening behind the scenes on that second line is this:
{
    // the desugared form (deref_mut comes from std::ops::DerefMut)
    let box_ref: &mut Box<i32> = &mut x;
    let heap_ref: &mut i32 = box_ref.deref_mut();
    *heap_ref = 42;
}
In other words, because Box isn't magic, we have to tell the compiler how to turn it into a regular, run of the mill borrowed pointer. This is what the Deref and DerefMut traits are for. This raises the question: what, exactly, is the lifetime of heap_ref?
The answer to this is in the definition of DerefMut (from memory because I'm in a hurry):
trait DerefMut: Deref {
    // Target is the associated type inherited from Deref;
    // the lifetime is written out explicitly here.
    fn deref_mut<'a>(&'a mut self) -> &'a mut Self::Target;
}
Like I said before, Rust absolutely cannot talk about "heap lifetimes". Instead, it has to tie the lifetime of the heap-allocated i32 to the only other lifetime it has on hand: the lifetime of the Box.
What this means is that "complicated" things don't have an expressible lifetime, and thus have to own the thing they manage. When you convert a complicated smart pointer/handle into a simple borrowed pointer, that is the moment that you have to introduce a lifetime, and you usually just use the lifetime of the handle itself.
Actually, I should clarify: by "lifetime of the handle", I really mean "the lifetime of the variable in which the handle is currently being stored": lifetimes are really for storage, not for values. This is typically why newcomers to Rust get tripped up when they can't work out why they can't do something like:
fn thingy<'a>() -> (Box<i32>, &'a i32) {
    let x = Box::new(1701);
    (x, &x) // error: `x` does not live long enough
}
"But... I know that the box will continue to live on, why does the compiler say it doesn't?!" Because Rust can't reason about heap lifetimes and must resort to tying the lifetime of &x to the variable x, not the heap allocation it happens to point to.
Is it the case that pointer checking isn't done entirely at compile-time, and Rust's smart pointers still introduce some runtime overhead compared to, say, raw pointers in C?
There are special runtime checks for things that can't be checked at compile time. These usually live in the std::cell module (Cell and RefCell). But in general, Rust checks everything at compile time and should produce the same code as you would get in C (if your C code isn't doing anything undefined).
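For example, here is a minimal sketch of the runtime-checked flavor using RefCell:

use std::cell::RefCell;

fn main() {
    let cell = RefCell::new(5);
    let a = cell.borrow_mut(); // dynamic borrow begins here
    let b = cell.borrow_mut(); // panics at run time: already mutably borrowed
    println!("{} {}", a, b);
}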
Or is it possible that the Rust compiler can't make fully correct decisions, and it sometimes needs to Just Trust The Programmer™, probably using one of the lifetime annotations (the ones with the <'lifetime_ident> syntax)? In this case, does this mean that the pointer/memory safety guarantee is not 100%, and still relies on the programmer writing correct code?
If the compiler cannot make the correct decision, you get a compile-time error telling you that it cannot verify what you are doing. This might also keep you from doing things you know are correct but that the compiler cannot prove. You can always drop into unsafe code in that case. But, as you correctly assumed, the compiler then relies partly on the programmer.
The compiler checks the function's implementation to see whether it does exactly what the lifetimes say it does. Then, at the call site of the function, it checks that the programmer uses the function correctly. This is similar to type-checking: a C++ compiler checks that you are returning an object of the correct type, and then checks at the call site that the returned object is stored in a variable of the correct type. At no time can the programmer of a function break that promise (except where unsafe is used, and you can always let the compiler enforce that no unsafe is used in your project).
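A small sketch of that two-sided check (illustrative function; the call below is deliberately invalid):

fn first<'a>(v: &'a Vec<i32>) -> &'a i32 {
    // the body is checked against the signature: the returned
    // reference really does borrow from `v`
    &v[0]
}

fn main() {
    let r;
    {
        let v = vec![1, 2, 3];
        r = first(&v);
    } // the call site is checked too: error, `v` does not live long enough
    println!("{}", r);
}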
Rust is continuously improved; more things may become legal in Rust once the compiler becomes smarter.
Another possibility is that Rust pointers are non-"universal" or restricted in some sense, so that the compiler can infer their properties entirely during compile-time, but they are not as useful as e. g. raw pointers in C or smart pointers in C++.
There are a few things that can go wrong in C:

dangling pointers
double free
null pointers
wild pointers

These don't happen in safe Rust:

Dangling pointers: you can never have a pointer that points to an object that is no longer on the stack or heap. That's proven at compile time through lifetimes.
Double free: you do not have manual memory management in Rust. Use a Box to allocate your objects (similar, but not equal, to a unique_ptr in C++); it frees its memory automatically, exactly once.
Null pointers: references can never be null; where C would hand out a null pointer, Rust uses Option.
Wild pointers: in safe Rust you can create a pointer to any location, but you cannot dereference it. Any reference you create is always bound to a live object.
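A sketch of that last point:

fn main() {
    // creating a wild raw pointer is allowed in safe Rust...
    let p = 0xdead_beef as *const i32;
    // ...but dereferencing it is not:
    // let v = *p; // error: dereference of raw pointer requires unsafe
    let x = 42;
    let r = &x; // a reference is always bound to a live object
    println!("{:?} {}", p, r);
}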
There are a few things that can go wrong in C++:

everything that can go wrong in C
smart pointers only help you not forget to call free; you can still create dangling references:

auto x = make_unique<int>(42);
auto& y = *x;  // y refers to the heap int
x.reset();     // frees the int; y now dangles
y = 99;        // undefined behavior
Rust fixes those:
see above
as long as y exists, you may not modify x. This is checked at compile time and cannot be circumvented by more levels of indirection or structs.
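A sketch of the Rust equivalent of the C++ snippet above, which is rejected at compile time:

fn main() {
    let mut x = Box::new(42);
    let y = &mut *x; // y borrows the heap i32, like `auto& y = *x;`
    drop(x);         // error: cannot move out of `x` because it is borrowed
    *y = 99;
}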
I have read somewhere that in a language that features pointers, it is not possible for the compiler to decide fully at compile time whether all pointers are used correctly and/or are valid (refer to an alive object) for various reasons, since that would essentially constitute solving the halting problem.
Rust doesn't prove that all your pointers are used correctly; you may still write bogus programs. What Rust proves is that you are not using invalid pointers: you never have null pointers, and you never have two pointers to the same object unless all of those pointers are immutable (const). Rust does not allow you to write arbitrary programs, since that set would include programs that violate memory safety. Right now Rust still prevents you from writing some useful programs, but there are plans to allow more (legal) programs to be written in safe Rust.
That is not surprising, intuitively, because in this case, we would be able to infer the runtime behavior of a program during compile-time, similarly to what's stated in this related question.
Revisiting the example in your referenced question about the halting problem:
void foo() {
    if (bar() == 0) this->a = 1;
}
The above C++ code would look one of two ways in Rust:
fn foo(&mut self) {
    if self.bar() == 0 {
        self.a = 1;
    }
}

fn foo(&mut self) {
    if bar() == 0 {
        self.a = 1;
    }
}
For an arbitrary bar you cannot prove whether self.a gets set, because bar might access global state. Rust will soon get const functions, which can be used to compute things at compile time (similar to constexpr). If bar is const, it becomes trivial to prove at compile time whether self.a is set to 1. Other than that, without pure functions or other restrictions on the function's content, you can never prove whether self.a is set to 1 or not.
Rust currently doesn't care whether your code is called or not; it cares whether the memory of self.a still exists during the assignment. self.bar() can never destroy self (except in unsafe code). Therefore self.a will always be available inside the if branch.
Most of the safety of Rust references is guaranteed by strict rules:
If you possess a const reference (&), you can clone this reference and pass it around, but you cannot create a mutable &mut reference out of it.
If a mutable (&mut) reference to an object exists, no other reference to this object can exist.
A reference is not allowed to outlive the object it refers to, and all functions manipulating references must declare how the references from their input and output are linked, using lifetime annotations (like 'a).
So in terms of expressiveness we are effectively more limited than when using plain raw pointers (for example, building a graph structure is not possible using only safe references), but these rules can be completely checked at compile time.
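A minimal sketch of the first two rules:

fn main() {
    let mut s = String::from("hi");

    let r1 = &s;
    let r2 = &s; // any number of shared (&) borrows may coexist
    // let m = &mut s; // error: cannot borrow `s` as mutable because
    //                 // it is also borrowed as immutable
    println!("{} {}", r1, r2);

    let m = &mut s; // exclusive borrow: no other live reference now
    m.push('!');
    println!("{}", s);
}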
Yet it is still possible to use raw pointers: you have to enclose the code dealing with them in an unsafe { /* ... */ } block, telling the compiler "trust me, I know what I am doing here". That is what some special smart pointers do internally, such as RefCell, which lets you have these rules checked at runtime rather than at compile time, to gain expressiveness.
Given the following struct:
type Exp struct {
    foo int
    bar *int
}
What is the difference in terms of performance between using a pointer and using a value in a struct? Is there any overhead, or is this just two schools of Go programming?
I would use pointers to implement a chained (linked) struct, but is this the only case where we have to use pointers in a struct to gain performance?
PS: in the above struct we're talking about a simple int, but it could be any other type (even a custom one).
Use the form which is most functionally useful for your program. Basically, this means if it's useful for the value to be nil, then use a pointer.
From a performance perspective, primitive numeric types are always more efficient to copy than to dereference a pointer. Even more complex data structures are still usually faster to copy if they are smaller than a cache line or two (under 128 bytes is a good rule of thumb for x86 CPUs).
When things get a little larger, you need to benchmark if performance concerns you. CPUs are very efficient at copying data, and there are so many variables involved which will determine the locality and cache friendliness of your data, it really depends on your program's behavior, and the hardware you're using.
This is an excellent series of articles if you want to better understand the how memory and software interact: "What every programmer should know about memory".
In short, I tell people to choose a pointer or not based on the logic of the program, and worry about performance later.
Use a pointer if you need to pass something to be modified.
Use a pointer if you need to determine if something was unset/nil.
Use a pointer if you are using a type that has methods with pointer receivers.
If the size of a pointer is less than that of the struct member, then using a pointer is more efficient, since you don't need to copy the member but only its address. Also, if you want to be able to move or share some part of a structure, it is better to have a pointer so that you can, again, share only the address of the member. See also the Go FAQ.
Functional programming attempts to ensure freedom from side-effects when invoking functions. Passing a mutable object as a parameter that is NOT returned gives a developer the ability to modify a non-returned object. In other words, side-effects are possible. If you pass primitive or immutable types, you are free from this extra layer of complexity when reasoning about a program. So does good Functional Programming practice dictate anything when you have the choice between passing a mutable object vs passing primitive types?
I am aware of the following properties of PURE functional programming:
If all your types are immutable, it doesn't matter whether they are object types
The mutable object cannot exist outside the scope where it is expected to be mutated
but in reality when you are dealing with huge code bases that have existed prior to Functional Programming gaining a newfound respect, you will often be dealing with data structures that have no corresponding immutable representation, and code will be a mixture of functional and imperative styles.
(Background - I am exposing C++ logic via Java Native Interface and am wondering when to avoid passing objects, but I'm not looking for an answer that directly applies to this use case. I just want some help applying good programming practice in a situation such as this)
I think the only thing that functional programming has to say about this matter is that you should ideally wrap all the impure stuff you expose in a monad, such as the famous IO monad.
But this is probably too radical for your users.