Is it possible to declare a tuple struct whose members are private, except for initialization?

Is it possible to declare a tuple struct where the members are hidden for all intents and purposes, except for constructing new values?
// usize isn't public since I don't want users to manipulate it directly
struct MyStruct(usize);

// But now I can't initialize the struct using an argument to it.
let my_var = MyStruct(0xff);
//           ^^^^^^^^^^^^^^
// How can I make this work?
Is there a way to keep the member private but still allow new structs to be initialized with an argument as shown above?
As an alternative, a method such as MyStruct::new can be implemented, but I'm still interested to know whether it's possible to avoid calling a method on the type, since the shorter form is convenient and reads nicely for types that wrap a single variable.
Background
Without going into too many details, the only purpose of this type is to wrap a single type (a helper which hides some details, adds some functionality, and is optimized away completely when compiled), so in this context using the Struct(value) style of initialization doesn't really expose hidden internals.
Further, since the wrapper is zero overhead, it's a little misleading to use a new method, which is often associated with allocation/creation rather than casting.
Just as it's convenient to write (int)v or int(v) instead of int::new(v), I'd like to do the same for my own type.
It's used often, so the ability to use a short expression is very convenient. Currently I'm using a macro which calls a new method; it's OK but a little awkward/indirect, hence this question.

Strictly speaking, this isn't possible in Rust.
However, the desired outcome can be achieved using a normal struct with a like-named function (yes, this works!):
pub struct MyStruct {
    value: usize,
}

#[allow(non_snake_case)]
pub fn MyStruct(value: usize) -> MyStruct {
    MyStruct { value }
}
Now, you can write MyStruct(5) but not access the internals of MyStruct.
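For instance, a minimal sketch of how this looks from the caller's side (the wrapper module name is just for illustration):
mod wrapper {
    pub struct MyStruct {
        value: usize,
    }

    #[allow(non_snake_case)]
    pub fn MyStruct(value: usize) -> MyStruct {
        MyStruct { value }
    }
}

fn main() {
    let my_var = wrapper::MyStruct(0xff); // constructor-like call works
    // my_var.value is not accessible here: the field is private to `wrapper`
    let _ = my_var;
}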

I'm afraid that such a concept is not possible, but for a good reason. Each member of a struct, unless marked with pub, is treated as an implementation detail that should not rise to the surface of the public API, regardless of when and how the object is currently being used. From this point of view, the question's goal reaches a conundrum: wishing to keep members private while letting the API user set them arbitrarily is not only uncommon but also not very sensible.
As you mentioned, having a method named new is the recommended way to do this. It's not as if you're compromising code readability with the extra characters you have to type. Alternatively, for the case where the struct is known to wrap a single item, making the member public can be a possible solution. That, on the other hand, would allow any kind of mutation through a mutable borrow (thus possibly breaking the struct's invariants, as mentioned by @MatthieuM.). This decision depends on the intended API.
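For comparison, a minimal sketch of the new approach the answer recommends (the #[inline] and const are just illustrations of how cheap such a wrapper usually is):
pub struct MyStruct {
    value: usize,
}

impl MyStruct {
    #[inline]
    pub const fn new(value: usize) -> Self {
        MyStruct { value }
    }
}

fn main() {
    let my_var = MyStruct::new(0xff);
    let _ = my_var;
}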

Related

Add element to immutable vector rust

I am trying to create a user input validation function in Rust utilising functional programming and recursion. How can I return an immutable vector with one element concatenated onto the end?
fn get_user_input(output_vec: Vec<String>) -> Vec<String> {
    // Some code that has two variables: repeat (bool) and new_element (String)
    if !repeat {
        return output_vec.add_to_end(new_element); // What function could "add_to_end" be?
    }
    get_user_input(output_vec.add_to_end(new_element)) // What function could "add_to_end" be?
}
There are functions for everything else:
push adds an element to the end of a mutable vector
append adds a mutable vector to the end of a mutable vector
concat joins immutable vectors into a new vector
??? adds an element to the end of an immutable vector
The only solution I have been able to get working is using:
[output_vec, vec![new_element]].concat()
but this seems inefficient as I'm making a new vector for just one element (so the size is known at compile time).
You are confusing Rust with a language where you only ever have references to objects. In Rust, code can have exclusive ownership of objects, and so you don't need to be as careful about mutating an object that could be shared, because you know whether or not the object is shared.
For example, this is valid JavaScript code:
const a = [];
a.push(1);
This works because a does not contain an array, it contains a reference to an array.1 The const prevents a from being repointed to a different object, but it does not make the array itself immutable.
So, in these kinds of languages, pure functional programming tries to avoid mutating any state whatsoever, such as pushing an item onto an array that is taken as an argument:
function add_element(arr) {
arr.push(1); // Bad! We mutated the array we have a reference to!
}
Instead, we do things like this:
function add_element(arr) {
return [...arr, 1]; // Good! We leave the original data alone.
}
What you have in Rust, given your function signature, is a totally different scenario! In your case, output_vec is owned by the function itself, and no other entity in the program has access to it. There is therefore no reason to avoid mutating it, if that is your goal:
fn get_user_input(mut output_vec: Vec<String>) -> Vec<String> {
//                ^^^ add mut
You have to keep in mind that any non-reference is an owned value. &Vec<String> would be an immutable reference to a vector something else owns, but Vec<String> is a vector this code owns and nobody else has access to.
Don't believe me? Here's a simple example of broken code that demonstrates this:
fn take_my_vec(y: Vec<String>) { }

fn main() {
    let mut x = Vec::<String>::new();
    x.push("foo".to_string());

    take_my_vec(x);

    println!("{}", x.len()); // error[E0382]: borrow of moved value: `x`
}
The expression x.len() causes a compile-time error, because the vector x was moved into the function argument and we don't own it anymore.
So why shouldn't the function mutate the vector it owns now? The caller can't use it anymore.
In summary, functional programming looks a bit different in Rust. In other languages that have no way to communicate "I'm giving you this object" you must avoid mutating values you are given because the caller may not expect you to change them. In Rust, who owns a value is clear, and the argument reflects that:
Is the argument a value (Vec<String>)? The function owns the value now, the caller gave it away and can't use it anymore. Mutate it if you need to.
Is the argument an immutable reference (&Vec<String>)? The function doesn't own it, and it can't mutate it anyway because Rust won't allow it. You could clone it and mutate the clone.
Is the argument a mutable reference (&mut Vec<String>)? The caller must explicitly give the function a mutable reference and is therefore giving the function permission to mutate it -- but the function still doesn't own the value. The function can mutate it, clone it, or both -- it depends what the function is supposed to do.
If you take an argument by value, there is very little reason not to make it mut if you need to change it for whatever reason. Note that this detail (mutability of function arguments) isn't even part of the function's public signature simply because it's not the caller's business. They gave the object away.
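Applied to the question, a minimal sketch (with repeat and new_element stubbed out, since the original code omits how they are produced) could therefore simply take the vector by value and push onto it:
fn get_user_input(mut output_vec: Vec<String>) -> Vec<String> {
    // Stand-ins for the omitted input/validation logic:
    let new_element = String::from("example");
    let repeat = false;

    output_vec.push(new_element); // mutate the vector this function now owns
    if repeat {
        return get_user_input(output_vec);
    }
    output_vec
}

fn main() {
    println!("{:?}", get_user_input(Vec::new()));
}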
Note that with types that have type arguments (like Vec) other expressions of ownership are possible. Here are a few examples (this is not an exhaustive list):
Vec<&String>: You now own a vector, but you don't own the String objects that it contains references to.
&Vec<&String>: You are given read-only access to a vector of string references. You could clone this vector, but you still couldn't change the strings, just rearrange them, for example.
&Vec<&mut String>: You are given read-only access to a vector of mutable string references. Because your access to the vector itself is shared, you can't rearrange the references, and you also can't mutate the strings through them: a &mut can only be used through another exclusive borrow.
&mut Vec<&String>: You are allowed to rearrange the string references, but you can't change the strings themselves.
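A minimal sketch (with made-up helper names) of those last two cases:
fn read_only_view(v: &Vec<&mut String>) {
    println!("{}", v[0]); // reading through the references is fine
    // v[0].push_str("!"); // error: cannot borrow as mutable behind a `&` reference
}

fn rearrange(v: &mut Vec<&String>) {
    v.swap(0, 1); // rearranging the references is fine
    // *v[0] = String::new(); // error: the strings themselves are read-only
}

fn main() {
    let (mut a, mut b) = (String::from("a"), String::from("b"));
    read_only_view(&vec![&mut a, &mut b]);

    let (c, d) = (String::from("c"), String::from("d"));
    rearrange(&mut vec![&c, &d]);
}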
1 A good way to think of it is that non-primitive values in JavaScript behave like an Rc<RefCell<T>>: you pass around a handle to an object with interior mutability, and const only prevents the handle itself from being reassigned.

Rust, std::cell::Cell - get immutable reference to inner data

Looking through the documentation for std::cell::Cell, I don't see anywhere how I can retrieve a non-mutable reference to inner data. There is only the get_mut method: https://doc.rust-lang.org/std/cell/struct.Cell.html#method.get_mut
I don't want to use this function because I want to have &self instead of &mut self.
I found an alternative solution of taking the raw pointer:
use std::cell::Cell;

struct DbObject {
    key: Cell<String>,
    data: String,
}

impl DbObject {
    pub fn new(data: String) -> Self {
        Self {
            key: Cell::new("some_uuid".into()),
            data,
        }
    }

    pub fn assert_key(&self) -> &str {
        // set up the key in the future if it is empty...
        let key = self.key.as_ptr();
        unsafe {
            let inner = key.as_ref().unwrap();
            return inner;
        }
    }
}

fn main() {
    let obj = DbObject::new("some data...".into());
    let key = obj.assert_key();
    println!("Key: {}", key);
}
Is there any way to do this without using unsafe? If not, perhaps RefCell will be more practical here?
Thank you for help!
First off, if you have a &mut T, you can trivially get a &T out of it. So you can use get_mut to get a &T.
But to get a &mut T from a Cell<T> you need that cell to be mutable, as get_mut takes a &mut self parameter. And this is by design the only way to get a reference to the inner object of a cell.
By requiring the use of a &mut self method to get a reference out of a cell, you make it possible to check for exclusive access at compile time with the borrow checker. Remember that a cell enables interior mutability, and has a method set(&self, val: T), that is, a method that can modify the value of a non-mut binding! If there was a get(&self) -> &T method, the borrow checker could not ensure that you do not hold a reference to the inner object while setting the object, which would not be safe.
TL;DR: By design, you can't get a &T out of a non-mut Cell<T>. Use get_mut (which requires a mut cell), or set/replace (which work on a non-mut cell). If this is not acceptable, then consider using RefCell, which can get you a &T out of a non-mut instance, at some runtime cost.
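For illustration, a minimal sketch of both options (the variable names are made up):
use std::cell::{Cell, RefCell};

fn main() {
    // Cell: no reference to the inside, but set/replace work on a non-mut binding.
    let key = Cell::new(String::from("old"));
    key.set(String::from("new"));
    println!("{}", key.into_inner());

    // RefCell: borrow() hands out a &T from a non-mut binding,
    // with the exclusivity check moved to runtime.
    let key = RefCell::new(String::from("some_uuid"));
    let shared = key.borrow(); // Ref<String>, derefs to &String
    println!("{}", &*shared);
}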
In addition to @mcarton's answer: in order to keep interior mutability sound, that is, to disallow mutable references from coexisting with other references, we have three different options:
Using unsafe with the possibility of Undefined Behavior. This is what UnsafeCell does.
Have some runtime checks, involving runtime overhead. This is the approach RefCell, RwLock and Mutex use.
Restrict the operations that can be done with the abstraction. This is what Cell, Atomic* and (the unstable) OnceCell (and thus Lazy, which uses it) do (note that the thread-safe types also have runtime overhead because they need to provide some sort of locking). Each provides a different set of allowed operations:
Cell and Atomic* do not let you get a reference to the contained value, only replace it as a whole (basically, get() and set(), though convenience methods such as swap() are provided on top of these). Projection (cell-of-slice to slice-of-cells) is also available for Cell (field projection is possible, but not provided as part of std).
OnceCell allows you to assign only once and only then take shared references, guaranteeing that when you assign you have no outstanding references, and that while you hold shared references you cannot assign anymore.
Thus, when you need to be able to take a reference into the content, you cannot choose Cell as it was not designed for that - the obvious choice is RefCell, indeed.
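As a sketch of the OnceCell point (shown with std::cell::OnceCell, which has since been stabilized; when this answer was written the same API lived outside std):
use std::cell::OnceCell;

fn main() {
    let cell: OnceCell<String> = OnceCell::new();

    // The first assignment succeeds and needs no &mut.
    cell.set(String::from("first")).unwrap();

    // Afterwards we can take as many shared references as we like...
    let a = cell.get().unwrap();
    let b = cell.get().unwrap();
    println!("{} {}", a, b);

    // ...but a second assignment is rejected (the value is handed back in the Err).
    assert!(cell.set(String::from("second")).is_err());
}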

Treating single and multiple elements the same way ("transparent" map operator)

I'm working on a programming language that is supposed to be easy, intuitive, and succinct (yeah, I know, I'm the first person to ever come up with that goal ;-) ).
One of the features that I am considering for simplifying the use of container types is to make the methods of the container's element type available on the container type itself, basically as a shortcut for invoking a map(...) method. The idea is that working with many elements should not be different from working with a single element: I can apply add(5) to a single number or to a whole list of numbers, and I shouldn't have to write slightly different code for the "one" versus the "many" scenario.
For example (Java pseudo-code):
import static java.math.BigInteger.*; // ZERO, ONE, ...
...
// NOTE: BigInteger has an add(BigInteger) method
Stream<BigInteger> numbers = Stream.of(ZERO, ONE, TWO, TEN);
Stream<BigInteger> one2Three11 = numbers.add(ONE); // = 1, 2, 3, 11
// this would be equivalent to: numbers.map(ONE::add)
As far as I can tell, the concept would not only apply to "container" types (streams, lists, sets...), but more generally to all functor-like types that have a map method (e.g., optionals, state monads, etc.).
The implementation approach would probably be more along the lines of syntactic sugar offered by the compiler rather than by manipulating the actual types (Stream<BigInteger> obviously does not extend BigInteger, and even if it did the "map-add" method would have to return a Stream<BigInteger> instead of an Integer, which would be incompatible with most languages' inheritance rules).
I have two questions regarding such a proposed feature:
(1) What are the known caveats with offering such a feature? Method name collisions between the container type and the element type are one problem that comes to mind (e.g., when I call add on a List<BigInteger> do I want to add an element to the list or do I want to add a number to all elements of the list? The argument type should clarify this, but it's something that could get tricky)
(2) Are there any existing languages that offer such a feature, and if so, how is this implemented under the hood? I did some research, and while pretty much every modern language has something like a map operator, I could not find any languages where the one-versus-many distinction would be completely transparent (which leads me to believe that there is some technical difficulty that I'm overlooking here)
NOTE: I am looking at this in a purely functional context that does not support mutable data (not sure if that matters for answering these questions)
Do you come from an object-oriented background? That's my guess because you're thinking of map as a method belonging to each different "type" as opposed to thinking about various things that are of the type functor.
Compare how TypeScript would handle this if map were a property of each individual functor:
declare const someOption: Option<number>
someOption.map(val => val * 2) // Option<number>

declare const someEither: Either<string, number>
someEither.map(val => val * 2) // Either<string, number>
someEither.mapLeft(err => 'ERROR') // Either<'ERROR', number>
You could also create a constant representing each individual functor instance (option, array, identity, either, async/Promise/Task, etc.), where these constants have map as a method. Then have a standalone map method that takes one of those "functor constant"s, the mapping function, and the starting value, and returns the new wrapped value:
declare const option: Functor
// option.map: <A, B>(f: (a: A) => B) => (o: Option<A>) => Option<B>

declare const someOption: Option<number>
map(option)(val => val * 2)(someOption) // Option<number>

declare const either: Functor
// either.map: <E, A, B>(f: (a: A) => B) => (e: Either<E, A>) => Either<E, B>

declare const someEither: Either<string, number>
map(either)(val => val * 2)(someEither) // Either<string, number>
Essentially, you have a functor "map" that uses the first parameter to identify which type you're going to be mapping, and then you pass in the data and the mapping function.
However, with proper functional languages like Haskell, you don't have to pass in that "functor constant" because the language will apply it for you. Haskell does this. I'm not fluent enough in Haskell to write you the examples, unfortunately. But that's a really nice benefit that means even less boilerplate. It also allows you to write a lot of your code in what is "point free" style, so refactoring becomes much easier if you make your language so you don't have to manually specify the type being used in order to take advantage of map/chain/bind/etc.
Consider you initially write your code that makes a bunch of API calls over HTTP. So you use a hypothetical async monad. If your language is smart enough to know which type is being used, you could have some code like
import { map as asyncMap }
declare const apiCall: Async<number>
asyncMap(n => n*2)(apiCall) // Async<number>
Now you change your API so it's reading a file and you make it synchronous instead:
import { map as syncMap }
declare const apiCall: Sync<number>
syncMap(n => n*2)(apiCall)
Look how you have to change multiple pieces of the code. Now imagine you have hundreds of files and tens of thousands of lines of code.
With a point-free style, you could do
import { map } from 'functor'
declare const apiCall: Async<number>
map(n => n*2)(apiCall)
and refactor to
import { map } from 'functor'
declare const apiCall: Sync<number>
map(n => n*2)(apiCall)
If you had a centralized location of your API calls, that would be the only place you're changing anything. Everything else is smart enough to recognize which functor you're using and apply map correctly.
As far as your concern about name collisions, that's a concern that will exist no matter the language or design. But in functional programming, add would be a combinator: the mapping function passed into fmap (the Haskell term) / map (the term in many imperative/OO languages). The function you use to add a new element to the tail end of an array/list might be called snoc ("cons", from "construct", spelled backwards; cons prepends an element to your list, snoc appends). You could also call it push or append.
As far as your one-vs-many issue, these are not the same type. One is a list/array type, and the other is an identity type. The underlying code treating them would be different, as they are different functors (one contains a single element, while the other contains multiple elements).
I suppose you could create a language that disallows single elements by automatically wrapping them as single-element lists and then just uses the list map. But this seems like a lot of work to make two things that are very different look the same.
Instead, the approach where you wrap single elements to be identity and multiple elements to be a list/array, and then array and identity have their own under-the-hood handlers for the functor method map probably would be better.
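For what it's worth, the same shape shows up in Rust's standard library (a purely illustrative sketch; Rust has no higher-kinded Functor trait, so each wrapper ships its own map):
fn main() {
    let one = Some(5);            // a "single element" wrapper
    let many = vec![0, 1, 2, 10]; // a "many elements" container

    // The mapping function is the same; the functor it runs in differs.
    let add_one = |n: &i32| n + 1;

    let one_mapped: Option<i32> = one.as_ref().map(add_one);
    let many_mapped: Vec<i32> = many.iter().map(add_one).collect();

    println!("{:?} {:?}", one_mapped, many_mapped); // Some(6) [1, 2, 3, 11]
}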

How does Rust implement reflection?

Rust has the Any trait, but it also has a "do not pay for what you do not use" policy. How does Rust implement reflection?
My guess is that Rust uses lazy tagging. Every type is initially unassigned, but later if an instance of the type is passed to a function expecting an Any trait, the type is assigned a TypeId.
Or maybe Rust puts a TypeId on every type that its instance is possibly passed to that function? I guess the former would be expensive.
First of all, Rust doesn't have reflection; reflection implies you can get details about a type at runtime, like the fields, methods, interfaces it implements, etc. You can not do this with Rust. The closest you can get is explicitly implementing (or deriving) a trait that provides this information.
Each type gets a TypeId assigned to it at compile time. Because having globally ordered IDs is hard, the ID is an integer derived from a combination of the type's definition, and assorted metadata about the crate in which it's contained. To put it another way: they're not assigned in any sort of order, they're just hashes of the various bits of information that go into defining the type. [1]
If you look at the source for the Any trait, you'll see the single implementation for Any:
impl<T: 'static + ?Sized> Any for T {
    fn get_type_id(&self) -> TypeId { TypeId::of::<T>() }
}
(The bounds can be informally reduced to "all types that aren't borrowed from something else".)
You can also find the definition of TypeId:
pub struct TypeId {
    t: u64,
}

impl TypeId {
    pub const fn of<T: ?Sized + 'static>() -> TypeId {
        TypeId {
            t: unsafe { intrinsics::type_id::<T>() },
        }
    }
}
intrinsics::type_id is an internal function recognised by the compiler that, given a type, returns its internal type ID. This call just gets replaced at compile time with the literal integer type ID; there's no actual call here. [2] That's how TypeId knows what a type's ID is. TypeId, then, is just a wrapper around this u64 to hide the implementation details from users. If you find it conceptually simpler, you can just think of a type's TypeId as being a constant 64-bit integer that the compiler just knows at compile time.
Any forwards to this from get_type_id, meaning that get_type_id is really just binding the trait method to the appropriate TypeId::of method. It's just there to ensure that if you have an Any, you can find out the original type's TypeId.
Now, Any is implemented for most types, but this doesn't mean that all those types actually have an Any implementation floating around in memory. What actually happens is that the compiler only generates the actual code for a type's Any implementation if someone writes code that requires it. [3] In other words, if you never use the Any implementation for a given type, the compiler will never generate it.
This is how Rust fulfills "do not pay for what you do not use": if you never pass a given type as &Any or Box<Any>, then the associated code is never generated and never takes up any space in your compiled binary.
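As a small illustration of the kind of runtime query this does allow (a sketch; describe is a made-up function, and the modern &dyn Any spelling is used):
use std::any::Any;

fn describe(value: &dyn Any) {
    // All the trait gives us is the TypeId, which lets us attempt downcasts.
    if let Some(n) = value.downcast_ref::<i32>() {
        println!("an i32: {}", n);
    } else if let Some(s) = value.downcast_ref::<String>() {
        println!("a String: {}", s);
    } else {
        println!("some other type entirely");
    }
}

fn main() {
    describe(&5_i32);
    describe(&String::from("hello"));
    describe(&3.5_f64);
}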
[1]: Frustratingly, this means that a type's TypeId can change value depending on precisely how the library gets compiled, to the point that compiling it as a dependency (as opposed to as a standalone build) causes TypeIds to change.
[2]: Insofar as I am aware. I could be wrong about this, but I'd be really surprised if that's the case.
[3]: This is generally true of generics in Rust.

When is it a good idea to return a pointer to a struct?

I'm learning Go, and I'm a little confused about when to use pointers. Specifically, when returning a struct from a function, when is it appropriate to return the struct instance itself, and when is it appropriate to return a pointer to the struct?
Example code:
type Car struct {
    make  string
    model string
}

func Whatever() Car {
    car := Car{"honda", "civic"}
    // ...
    return car
}
What are the situations where I would want to return a pointer, and where I would not want to? Is there a good rule of thumb?
There are two things you want to keep in mind, performance and API.
How is a Car used? Is it an object which has state? Is it a large struct? Unfortunately, it is impossible to answer when I have no idea what a Car is. Truthfully, the best way is to see what others do and copy them. Eventually, you get a feeling for this sort of thing. I will now describe three examples from the standard library and explain why I think they used what they did.
hash/crc32: The crc32.NewIEEE() function returns a pointer type (actually, an interface, but the underlying type is a pointer). An instance of a hash function has state. As you write information to a hash, it sums up the data so when you call the Sum() method, it will give you the state of that one instance.
time: The time.Date function returns a Time struct. Why? A time is a time. It has no state. It is like an integer where you can compare them, perform maths on them, etc. The API designer decided that a modification to a time would not change the current one but make a new one. As a user of the library, if I want the time one month from now, I would want a new time object, not to change the current one I have. A time is also only 3 words in length. In other words, it is small and there would be no performance gain in using a pointer.
math/big: big.NewInt() is an interesting one. We can pretty much agree that when you modify a big.Int, you will often want a new one. A big.Int has no internal state, so why is it a pointer? The answer is simply performance. The programmers realized that big ints are … big. Constantly allocating each time you do a mathematical operation may not be practical. So, they decided to use pointers and allow the programmer to decide when to allocate new space.
Have I answered your question? Probably not. It is a design decision and you need to figure it out on a case by case basis. I use the standard library as a guide when I am designing my own libraries. It really all comes down to judgement and how you expect client code to use your types.
Very loosely, exceptions are likely to show up in specific circumstances:
Return a value when it is really small (no more than few words).
Return a pointer when the copying overhead would substantially hurt performance (size is a lot of words).
Often, when you want to mimic an object-oriented style, where you have an "object" that stores state and "methods" that can alter the object, you would have a "constructor" function that returns a pointer to a struct (think of it as the "object reference", as in other OO languages). Mutator methods have to be defined on the pointer-to-the-struct type instead of the struct type itself in order to change the fields of the "object", so it's convenient to return a pointer to the struct rather than a struct value, so that all "methods" are in its method set.
For example, to mimic something like this in Java:
class Car {
    String make;
    String model;

    public Car(String myMake) { make = myMake; }
    public void setMake(String newMake) { make = newMake; }
}
You would often see something like this in Go:
type Car struct {
    make  string
    model string
}

func NewCar(myMake string) *Car {
    return &Car{myMake, ""}
}

func (self *Car) setMake(newMake string) {
    self.make = newMake
}

Resources