Cannot compare &u8 with u8 - vector

fn count_spaces(text: Vec<u8>) -> usize {
    text.split(|c| c == 32u8).count()
}
The above function does not compile, and gives the following error on the comparison:
trait `&u8: std::cmp::PartialEq` not satisfied
I read this as: "c is a borrowed byte and cannot be compared to a regular byte", but I must be reading this wrong.
What would be the appropriate way to split a Vec<u8> on specific values?
I do realize that there are options when reading files, like splitting a BufReader or I could convert the vector to a string and use str::split. I might go with such a solution (passing in a BufReader instead of a Vec<u8>), but right now I'm just playing around, testing stuff and want to know what I'm doing wrong.

The code
You are actually reading it right: c is indeed a borrowed byte and cannot be compared to a regular byte. Try using any of the functions below instead:
fn count_spaces(text: Vec<u8>) -> usize {
    text.split(|&c| c == 32u8).count()
}
fn count_spaces(text: Vec<u8>) -> usize {
    text.split(|c| *c == 32u8).count()
}
The first one uses pattern matching on the parameter (&c) to dereference it, while the second one uses the dereference operator (*).
Why is c a &u8 instead of a u8?
If you take a look at the split method on the docs, you will see that the closure parameter is a borrow of the data in Vec. In this case, it means that the parameter will be &u8 instead of u8 (so in your code you are actually comparing &u8 to u8, which Rust doesn't like).
In order to understand why the closure takes the parameter by borrow and not by value, consider what would happen if the parameter were taken by value. In the case of Vec<u8>, there would be no problem, since u8 implements Copy. However, in the case of a Vec<String>, each String would be moved into the closure and destroyed!
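To see why the borrow matters, here is a minimal sketch (not from the original question) that splits a Vec<String> instead: the predicate still receives a borrow (&String), so no String is moved out of the vector while splitting.
fn count_groups(words: Vec<String>) -> usize {
    // The predicate receives &String; comparing it with a &str works
    // because String implements PartialEq<str>.
    words.split(|w| w == "stop").count()
}

fn main() {
    let words = vec!["a".to_string(), "stop".to_string(), "b".to_string()];
    assert_eq!(count_groups(words), 2); // the two groups are ["a"] and ["b"]
}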

How to invoke a multi-argument function without creating a closure?

I came across this while doing the 2018 Advent of Code (Day 2, Part 1) solution in Rust.
The problem to solve:
Take the count of strings that have exactly two of the same letter, multiplied by the count of strings that have exactly three of the same letter.
INPUT
abcdega
hihklmh
abqasbb
aaaabcd
The first string abcdega has a repeated twice.
The second string hihklmh has h repeated three times.
The third string abqasbb has a repeated twice, and b repeated three times, so it counts for both.
The fourth string aaaabcd contains a letter repeated 4 times (not 2, or 3) so it does not count.
So the result should be:
2 strings that contained a double letter (first and third) multiplied by 2 strings that contained a triple letter (second and third) = 4
The Question:
const PUZZLE_INPUT: &str =
"
abcdega
hihklmh
abqasbb
aaaabcd
";
fn letter_counts(id: &str) -> [u8; 26] {
    id.chars().map(|c| c as u8).fold([0; 26], |mut counts, c| {
        counts[usize::from(c - b'a')] += 1;
        counts
    })
}
fn has_repeated_letter(n: u8, letter_counts: &[u8; 26]) -> bool {
    letter_counts.iter().any(|&count| count == n)
}
fn main() {
    let ids_iter = PUZZLE_INPUT.lines().map(letter_counts);
    let num_ids_with_double = ids_iter.clone().filter(|id| has_repeated_letter(2, id)).count();
    let num_ids_with_triple = ids_iter.filter(|id| has_repeated_letter(3, id)).count();
    println!("{}", num_ids_with_double * num_ids_with_triple);
}
Rust Playground
Consider the line in main that builds ids_iter. The function letter_counts takes only one argument, so I can use the syntax .map(letter_counts) on elements that match the type of the expected argument. This is really nice to me! I love that I don't have to create a closure: .map(|id| letter_counts(id)). I find both to be readable, but the former version without the closure is much cleaner to me.
Now consider the two lines that compute num_ids_with_double and num_ids_with_triple. Here, I have to use the syntax .filter(|id| has_repeated_letter(3, id)) because the has_repeated_letter function takes two arguments. I would really like to do .filter(has_repeated_letter(3)) instead.
Sure, I could make the function take a tuple instead, map to a tuple and consume only a single argument... but that seems like a terrible solution. I'd rather just create the closure.
Leaving out the only argument is something that Rust lets you do. Why would it be any harder for the compiler to let you leave out the last argument, provided that it has all of the other n-1 arguments for a function that takes n arguments?
I feel like this would make the syntax a lot cleaner, and it would fit in a lot better with the idiomatic functional style that Rust prefers.
I am certainly no expert in compilers, but implementing this behavior seems like it would be straightforward. If my thinking is incorrect, I would love to know more about why that is so.
No, you cannot pass a function with multiple arguments as an implicit closure.
In certain cases, you can choose to use currying to reduce the arity of a function. For example, here we reduce the add function from 2 arguments to one:
fn add(a: i32, b: i32) -> i32 {
    a + b
}
fn curry<A1, A2, R>(f: impl FnOnce(A1, A2) -> R, a1: A1) -> impl FnOnce(A2) -> R {
    move |a2| f(a1, a2)
}
fn main() {
    let a = Some(1);
    a.map(curry(add, 2));
}
However, I agree with the comments that this isn't a benefit:
- It's not any less typing:
  a.map(curry(add, 2));
  a.map(|v| add(v, 2));
- The curry function is extremely limited: it chooses to use FnOnce, but Fn and FnMut also have use cases. It only applies to a function with two arguments.
However, I have used this higher-order function trick in other projects, where the amount of code that is added is much greater.
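If the closure needs to be callable more than once (for example inside map or filter over many elements), a Fn-based variant is possible whenever the captured argument can be cloned. A hedged sketch, with curry_fn as a hypothetical helper name, not a standard API:
fn add(a: i32, b: i32) -> i32 {
    a + b
}

// Like curry above, but returns a reusable closure by cloning the captured argument.
fn curry_fn<A1: Clone, A2, R>(f: impl Fn(A1, A2) -> R, a1: A1) -> impl Fn(A2) -> R {
    move |a2| f(a1.clone(), a2)
}

fn main() {
    let add_two = curry_fn(add, 2);
    // Unlike the FnOnce version, this closure can be called once per element.
    let sums: Vec<i32> = vec![1, 2, 3].into_iter().map(add_two).collect();
    assert_eq!(sums, [3, 4, 5]);
}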

How to chain tokio read functions?

Is there a way to chain the read_* functions in tokio::io in a "recursive" way?
I'm essentially looking to do something like:
read_until x then read_exact y then write response then go back to the top.
In case you are confused about which functions I'm talking about: https://docs.rs/tokio/0.1.11/tokio/io/index.html
Yes, there is a way.
read_until returns a struct ReadUntil, which implements the Future trait, which itself provides a lot of useful functions, e.g. and_then, which can be used to chain futures.
A simple (and silly) example looks like this:
extern crate futures; // 0.1.24
extern crate tokio_io; // 0.1.8
use futures::future::Future;
use std::io::Cursor;
use tokio_io::io::{read_exact, read_until};
fn main() {
    let cursor = Cursor::new(b"abcdef\ngh");
    let mut buf = vec![0u8; 2];
    println!(
        "{:?}",
        String::from_utf8_lossy(
            read_until(cursor, b'\n', vec![])
                .and_then(|r| read_exact(r.0, &mut buf))
                .wait()
                .unwrap()
                .1
        )
    );
}
Here I use a Cursor, which happens to implement the AsyncRead trait, and use the read_until function to read until a newline occurs (between 'f' and 'g').
Afterwards, to chain those, I use and_then to call read_exact in case of success, use wait to get the Result, unwrap it (don't do this in production, kids!) and take the second element of the tuple (the first one is the cursor).
Lastly, I convert the bytes into a String to display "gh" with println!.
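The "go back to the top" part of the question can be expressed with loop_fn from futures 0.1, which keeps calling a future-producing closure until you return Loop::Break. A rough, hedged sketch using the same crates as above; the write-response step is omitted and the input data and stop condition are made up for illustration:
extern crate futures; // 0.1.24
extern crate tokio_io; // 0.1.8
use futures::future::{loop_fn, Future, Loop};
use std::io::Cursor;
use tokio_io::io::{read_exact, read_until};
fn main() {
    let cursor = Cursor::new(b"ab\ncdab\ncd");
    let result = loop_fn((cursor, 0u32), |(reader, rounds)| {
        // read_until x, then read_exact y, then decide whether to loop again
        read_until(reader, b'\n', vec![])
            .and_then(|(reader, _line)| read_exact(reader, vec![0u8; 2]))
            .map(move |(reader, _chunk)| {
                if rounds < 1 {
                    Loop::Continue((reader, rounds + 1))
                } else {
                    Loop::Break(rounds + 1)
                }
            })
    });
    println!("{} rounds", result.wait().unwrap());
}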

Why does a &[T] argument also accept &Vec<T>?

I am working through the Rust book, namely the minigrep project. There I came across the following snippet:
fn main() {
    let args: Vec<String> = env::args().collect();
    let (query, filename) = parse_config(&args);
    // --snip--
}
fn parse_config(args: &[String]) -> (&str, &str) {
    let query = &args[1];
    let filename = &args[2];
    (query, filename)
}
The confusing piece for me is args: &[String]. If I replace it with args: &Vec<String>, it also works. My guess is that &[String] is a more general type annotation that matches not only &Vec<String>, but also some other types. Is that correct? If so, what other types are matched by [T]?
Generally speaking, [T] is a contiguous sequence of values of type T, and &[T] is a slice: a borrowed view into such a sequence.
The reason why the compiler allows &[String] instead of &Vec<String> is that Vec<T> dereferences to [T]. This is called Deref coercion. It can be said that the former notation (in function parameters) is more general; it is also the preferred one. Further details about automatic dereferencing rules can be found in this question.
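A minimal example of what that buys you: the same &[T] parameter accepts a borrowed Vec, a borrowed array, or an explicit slice.
fn sum(values: &[i32]) -> i32 {
    values.iter().sum()
}

fn main() {
    let v = vec![1, 2, 3];
    let a = [4, 5, 6];
    assert_eq!(sum(&v), 6);      // &Vec<i32> coerces to &[i32]
    assert_eq!(sum(&a), 15);     // &[i32; 3] coerces to &[i32]
    assert_eq!(sum(&v[..2]), 3); // slicing gives &[i32] directly
}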

Concatenate a vector of vectors of strings

I'm trying to write a function that receives a vector of vectors of strings and returns all vectors concatenated together, i.e. it returns a vector of strings.
The best I could do so far has been the following:
fn concat_vecs(vecs: Vec<Vec<String>>) -> Vec<String> {
    let vals: Vec<&String> = vecs.iter().flat_map(|x| x.into_iter()).collect();
    vals.into_iter().map(|v: &String| v.to_owned()).collect()
}
However, I'm not happy with this result, because it seems I should be able to get Vec<String> from the first collect call, but somehow I am not able to figure out how to do it.
I am even more interested to figure out why exactly the return type of collect is Vec<&String>. I tried to deduce this from the API documentation and the source code, but despite my best efforts, I couldn't even understand the signatures of functions.
So let me try and trace the types of each expression:
- vecs.iter(): Iter<T=Vec<String>, Item=Vec<String>>
- vecs.iter().flat_map(): FlatMap<I=Iter<Vec<String>>, U=???, F=FnMut(Vec<String>) -> U, Item=U>
- vecs.iter().flat_map().collect(): (B=??? : FromIterator<U>)
- vals was declared as Vec<&String>, therefore vals == vecs.iter().flat_map().collect(): (B=Vec<&String> : FromIterator<U>). Therefore U=&String.
I'm assuming above that the type inferencer is able to figure out that U=&String based on the type of vals. But if I give the expression the explicit types in the code, this compiles without error:
fn concat_vecs(vecs: Vec<Vec<String>>) -> Vec<String> {
    let a: Iter<Vec<String>> = vecs.iter();
    let b: FlatMap<Iter<Vec<String>>, Iter<String>, _> = a.flat_map(|x| x.into_iter());
    let c = b.collect();
    print_type_of(&c);
    let vals: Vec<&String> = c;
    vals.into_iter().map(|v: &String| v.to_owned()).collect()
}
Clearly, U=Iter<String>... Please help me clear up this mess.
EDIT: thanks to bluss' hint, I was able to achieve one collect as follows:
fn concat_vecs(vecs: Vec<Vec<String>>) -> Vec<String> {
    vecs.into_iter().flat_map(|x| x.into_iter()).collect()
}
My understanding is that by using into_iter I transfer ownership of vecs to IntoIter and further down the call chain, which allows me to avoid copying the data inside the lambda call and therefore - magically - the type system gives me Vec<String> where it used to always give me Vec<&String> before. While it is certainly very cool to see how the high-level concept is reflected in the workings of the library, I wish I had any idea how this is achieved.
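A quick way to see that item-type difference in isolation (a small sketch, not part of the original code):
fn main() {
    let v = vec![String::from("a"), String::from("b")];
    {
        // iter() borrows v: the items are &String, so collect() builds Vec<&String>.
        let borrowed: Vec<&String> = v.iter().collect();
        assert_eq!(borrowed.len(), 2);
    }
    // into_iter() consumes v: the items are String, so collect() builds Vec<String>.
    let owned: Vec<String> = v.into_iter().collect();
    assert_eq!(owned, ["a", "b"]);
}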
EDIT 2: After a laborious process of guesswork, looking at API docs and using this method to decipher the types, I got them fully annotated (disregarding the lifetimes):
fn concat_vecs(vecs: Vec<Vec<String>>) -> Vec<String> {
    let a: Iter<Vec<String>> = vecs.iter();
    let f: &Fn(&Vec<String>) -> Iter<String> = &|x: &Vec<String>| x.into_iter();
    let b: FlatMap<Iter<Vec<String>>, Iter<String>, &Fn(&Vec<String>) -> Iter<String>> = a.flat_map(f);
    let vals: Vec<&String> = b.collect();
    vals.into_iter().map(|v: &String| v.to_owned()).collect()
}
I'd think about: why do you use iter() on the outer vec but into_iter() on the inner vecs? Using into_iter() is actually crucial, so that we don't have to copy first the inner vectors, then the strings inside, we just receive ownership of them.
We can actually write this just like a summation: concatenate the vectors two by two. Since we always reuse the allocation & contents of the same accumulation vector, this operation is linear time.
To minimize time spent growing and reallocating the vector, calculate the space needed up front.
fn concat_vecs(vecs: Vec<Vec<String>>) -> Vec<String> {
    let size = vecs.iter().fold(0, |a, b| a + b.len());
    vecs.into_iter().fold(Vec::with_capacity(size), |mut acc, v| {
        acc.extend(v);
        acc
    })
}
If you do want to clone all the contents, there's already a method for that, and you'd just use vecs.concat() /* -> Vec<String> */
The approach with .flat_map is fine, but if you don't want to clone the strings again you have to use .into_iter() on all levels: (x is Vec<String>).
vecs.into_iter().flat_map(|x| x.into_iter()).collect()
If instead you want to clone each string you can use this: (Changed .into_iter() to .iter() since x here is a &Vec<String> and both methods actually result in the same thing!)
vecs.iter().flat_map(|x| x.iter().map(Clone::clone)).collect()
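For completeness, a small check (a sketch with made-up input) that the cloning and the owning one-liners produce the same result, with only the second one consuming the input:
fn main() {
    let vecs = vec![
        vec!["a".to_string(), "b".to_string()],
        vec!["c".to_string()],
    ];
    // Cloning variant: vecs is only borrowed here.
    let cloned: Vec<String> = vecs.iter().flat_map(|x| x.iter().map(Clone::clone)).collect();
    // Owning variant: vecs is moved and its strings are reused.
    let owned: Vec<String> = vecs.into_iter().flat_map(|x| x.into_iter()).collect();
    assert_eq!(cloned, owned);
    assert_eq!(owned, ["a", "b", "c"]);
}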

Why does asserting on the result of Deref::deref fail with a type mismatch?

The following is the Deref example from The Rust Programming Language except I've added another assertion.
Why doesn't the assert_eq with just x.deref() compile? Why do I need a * once I've manually called deref?
use std::ops::Deref;
struct DerefExample<T> {
    value: T,
}
impl<T> Deref for DerefExample<T> {
    type Target = T;
    fn deref(&self) -> &T {
        &self.value
    }
}
fn main() {
    let x = DerefExample { value: 'a' };
    assert_eq!('a', *x.deref()); // this is true
    // assert_eq!('a', x.deref()); // this is a compile error
    assert_eq!('a', *x); // this is also true
    println!("ok");
}
If I uncomment the line, I get this error:
error[E0308]: mismatched types
--> src/main.rs:18:5
|
18 | assert_eq!('a', x.deref());
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected char, found &char
|
= note: expected type `char`
found type `&char`
= help: here are some functions which might fulfill your needs:
- .to_ascii_lowercase()
- .to_ascii_uppercase()
= note: this error originates in a macro outside of the current crate (in Nightly builds, run with -Z external-macro-backtrace for more info)
First, let's spell out the generic types for your specific example: 'a' is char, so we have:
impl Deref for DerefExample<char> {
    type Target = char;
    fn deref(&self) -> &char {
        &self.value
    }
}
Notably, the return type of deref is a reference to a char. Thus it shouldn't be surprising that, when you use just x.deref(), the result is a &char rather than a char. Remember, at that point deref is just another normal method — it's just implicitly invoked as part of some language-provided special syntax. *x, for example, will call deref and dereference the result, when applicable. x.char_method() and fn_taking_char(&x) will also call deref some number of times and then do something further with the result.
Why does deref return a reference to begin with, you ask? Isn't that circular? Well, no, it isn't circular: it reduces library-defined smart pointers to the built-in type &T which the compiler already knows how to dereference. By returning a reference instead of a value, you avoid a copy/move (which may not always be possible!) and allow &*x (or &x when it's coerced) to refer to the actual char that DerefExample holds rather than a temporary copy.
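To make that concrete, here is a small check (a sketch reusing the DerefExample type and the Deref import from the example above): the reference returned by deref() and the one produced by &*x point at the very same char stored inside the struct, with no copy involved.
fn main() {
    let x = DerefExample { value: 'a' };
    let r1: &char = x.deref(); // explicit call: just a method returning &char
    let r2: &char = &*x;       // *x calls deref() and then re-borrows the result
    assert!(std::ptr::eq(r1, r2)); // both point at the char inside x
    assert_eq!(*r1, 'a');
}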
See also:
Why is the return type of Deref::deref itself a reference?

Resources