#![feature(ptr_internals)]
use core::ptr::Unique;
struct PtrWrapper {
id: usize,
self_reference: Unique<Self>
}
impl PtrWrapper {
fn new() -> Self {
let dummy = unsafe {Unique::new_unchecked(std::ptr::null_mut::<PtrWrapper>())};
let mut ret = Self {id:0, self_reference: dummy };
let new_ptr = &mut ret as *mut Self;
debug_print(new_ptr);
ret.self_reference = Unique::new(new_ptr).unwrap();
debug_print(ret.self_reference.as_ptr());
ret
}
fn get_id(&self) -> usize {
self.id.clone()
}
}
fn main() {
println!("START");
let mut wrapper = PtrWrapper::new();
wrapper.id = 10;
let ptr = wrapper.self_reference.as_ptr();
unsafe {
(*ptr).id += 30;
println!("The next print isn't 40? Garbage bytes");
debug_print(ptr);
let tmp = &mut wrapper as *mut PtrWrapper;
(*tmp).id += 500;
println!("The next print isn't 540?");
debug_print(tmp);
}
println!("Below debug_print is proof of undefined behavior! Garbage bytes\n");
debug_print(wrapper.self_reference.as_ptr());
debug_print(&mut wrapper as *mut PtrWrapper);
debug_print_move(wrapper);
println!("Why is the assertion below false?");
assert_eq!(unsafe{(*ptr).id}, 540);
}
fn debug_print_move(mut wrapper: PtrWrapper) {
debug_print(&mut wrapper as *mut PtrWrapper);
}
fn debug_print(ptr: *mut PtrWrapper) {
println!("Address: {:p}", ptr);
println!("ID: {}\n", unsafe {(*ptr).get_id()});
}
The above code should compile fine on the Rust Playground with a nightly version selected. Pay attention to the console output.
My question is: why aren't the intermediate results equal to the values I expect? In the code above there is no simultaneous access (it is single-threaded), so there are no data races. There are, however, implicitly multiple mutable versions of the object existing on the stack.
As expected, the memory location of the pointer changes with the tmp variable as well as when the entire object is moved into debug_print_move. Using the tmp pointer works as expected (i.e., it adds 500); however, the pointers obtained from the Unique<PtrWrapper> object seem to point to irrelevant locations in memory.
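The root cause is that new() returns Self by value: the return moves the freshly built struct into a new stack slot in the caller, so the address stored in self_reference still refers to the old, now-dead location of ret. Here is a minimal sketch (independent of the code above) showing that returning by value changes the address:
struct Thing {
    id: usize,
}

fn make() -> Thing {
    let t = Thing { id: 0 };
    println!("inside make: {:p}", &t); // address of the local `t`
    t // returning moves `t` into the caller's binding
}

fn main() {
    let t = make();
    println!("in main:     {:p}", &t); // usually a different address
    // Any raw pointer captured inside `make` would now be dangling,
    // which is exactly what happens to `self_reference` above.
}
Every later move (into debug_print_move, for example) invalidates the stored pointer again.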
As Stargateur recommended, in order to solve this problem we need to Pin the object which needs to be self-referential. I ended up using:
pin-api = "0.2.1"
in Cargo.toml instead of std::pin::Pin. Next, I set up the struct and its implementation:
#![feature(ptr_internals, pin_into_inner, optin_builtin_traits)]
// not available on rust-playground
extern crate pin_api;
use pin_api::{boxed::PinBox, marker::Unpin, mem::Pin};
///test
pub struct PtrWrapper<T>
where
T: std::fmt::Debug,
{
///tmp
pub obj: T,
/// pinned object
pub self_reference: *mut Self,
}
impl<T> !Unpin for PtrWrapper<T> where T: std::fmt::Debug {}
impl<T> PtrWrapper<T>
where
T: std::fmt::Debug,
{
///test
pub fn new(obj: T) -> Self {
Self {
obj,
self_reference: std::ptr::null_mut(),
}
}
///test
pub fn init(mut self: Pin<PtrWrapper<T>>) {
let mut this: &mut PtrWrapper<T> = unsafe { Pin::get_mut(&mut self) };
this.self_reference = this as *mut Self;
}
/// Debug print
pub fn print_obj(&self) {
println!("Obj value: {:#?}", self.obj);
}
}
Finally, the test function:
fn main2() {
unsafe {
println!("START");
let mut wrapper = PinBox::new(PtrWrapper::new(10));
wrapper.as_pin().init();
let m = wrapper.as_pin().self_reference;
(*m).obj += 30;
println!("The next print is 40");
debug_print(m);
let tmp = wrapper.as_pin().self_reference;
(*tmp).obj += 500;
println!("The next print is 540?");
debug_print(tmp);
debug_print(wrapper.self_reference);
let cpy = PinBox::get_mut(&mut wrapper);
debug_print_move(cpy);
std::mem::drop(wrapper);
println!("Works!");
assert_eq!(unsafe { (*m).obj }, 540);
}
}
fn debug_print_move<T>(mut wrapper: &mut PtrWrapper<T>)
where
T: std::fmt::Debug,
{
debug_print(&mut *wrapper as *mut PtrWrapper<T>);
}
fn debug_print<T>(ptr: *mut PtrWrapper<T>)
where
T: std::fmt::Debug,
{
println!("Address: {:p}", ptr);
unsafe { (*ptr).print_obj() };
}
On a side note, pin-api is not available on the Rust Playground. You could still use std::pin::Pin; however, it would require further customization.
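For reference, here is a minimal sketch of the same idea using only std::pin on current stable Rust (the names and exact field layout are just illustrative):
use std::marker::PhantomPinned;
use std::pin::Pin;

struct PtrWrapper {
    id: usize,
    self_reference: *const PtrWrapper,
    // Opting out of Unpin guarantees the value cannot be moved out of the Pin
    // in safe code once it has been pinned.
    _pin: PhantomPinned,
}

impl PtrWrapper {
    fn new(id: usize) -> Pin<Box<PtrWrapper>> {
        let mut boxed = Box::pin(PtrWrapper {
            id,
            self_reference: std::ptr::null(),
            _pin: PhantomPinned,
        });
        let self_ptr: *const PtrWrapper = &*boxed;
        // SAFETY: we only write a field; the pinned value itself is never moved.
        unsafe { boxed.as_mut().get_unchecked_mut().self_reference = self_ptr };
        boxed
    }
}

fn main() {
    let wrapper = PtrWrapper::new(10);
    assert!(std::ptr::eq(&*wrapper, wrapper.self_reference));
    println!("id via self_reference: {}", unsafe { (*wrapper.self_reference).id });
}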
When I try the code below for the Vec<Ev>, I get an E0308 mismatched types error.
use std::fmt::Error;
#[derive(Debug)]
struct Ev {
semt: String,
fiyat : i32,
}
impl Ev {
fn yeni (alan: &str,fiyat: i32) -> Ev {
Self {
semt: alan.to_string(),
fiyat
}
}
}
fn dizi_yap(boyut:usize) -> Result<Vec<Ev>,Error> {
let mut evler = Vec::<Ev>::with_capacity(boyut);
evler.push(Ev::yeni("melikgazi", 210));
evler.push(Ev::yeni("kocasinan", 120));
evler.push(Ev::yeni("hacılar", 410));
evler.push(Ev::yeni("bünyan", 90));
Ok(evler)
}
fn elemani_getir(&mut dizi:Vec<Ev>, sira:usize) -> Ev {
dizi[sira]
// dizi.get(sira).expect("hata")
}
fn main() {
let mut dizi = dizi_yap(1).expect("ulasmadi");
println!("eleman: {:?}",dizi[3]);
println!("eleman: {:?}",elemani_getir(dizi, 3))
}
How can I get an indexed item from the Vec in this example?
The syntax in your function arguments is a little off. Mutable arguments can be a little confusing, as there are two different representations. Refer to this question for a more detailed explanation.
Here is the elemani_getir function corrected:
fn elemani_getir(mut dizi: &Vec<Ev>, sira: usize) -> &Ev {
&dizi[sira]
}
And you can call it like this:
println!("eleman: {:?}", elemani_getir(&dizi, 3))
Note that elemani_getir now returns a reference to Ev (&Ev). Returning Ev instead results in an error:
cannot move out of index of `std::vec::Vec<Ev>`
To get around this error, you can either return a reference to Ev as shown above, or return an exact duplicate of Ev by deriving the Clone trait:
#[derive(Debug, Clone)]
struct Ev {
semt: String,
fiyat: i32,
}
fn elemani_getir(mut dizi: &Vec<Ev>, sira: usize) -> Ev {
dizi[sira].clone()
}
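If you would rather avoid a panic on an out-of-range index, a variant returning Option (and taking a slice, which is a bit more general than &Vec<Ev>) could look like this:
fn elemani_getir(dizi: &[Ev], sira: usize) -> Option<&Ev> {
    dizi.get(sira)
}

// Called as, for example:
// println!("eleman: {:?}", elemani_getir(&dizi, 3).expect("hata"));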
I want to lazily consume the nodes of a file tree one by one while sorting the siblings on each level.
In Python, I'd use a synchronous generator:
def traverse_dst(src_dir, dst_root, dst_step):
"""
Recursively traverses the source directory and yields a sequence of (src, dst) pairs;
"""
dirs, files = list_dir_groom(src_dir) # Getting immediate offspring.
for d in dirs:
step = list(dst_step)
step.append(d.name)
yield from traverse_dst(d, dst_root, step)
for f in files:
dst_path = dst_root.joinpath(step)
yield f, dst_path
In Elixir, a (lazy) stream:
def traverse_flat_dst(src_dir, dst_root, dst_step \\ []) do
{dirs, files} = list_dir_groom(src_dir) # Getting immediate offspring.
traverse = fn d ->
step = dst_step ++ [Path.basename(d)]
traverse_flat_dst(d, dst_root, step)
end
handle = fn f ->
dst_path =
Path.join(
dst_root,
dst_step
)
{f, dst_path}
end
Stream.flat_map(dirs, traverse)
|> Stream.concat(Stream.map(files, handle))
end
One can see some language features addressing recursion: yield from in Python, flat_map in Elixir; the latter looks like a classic functional approach.
It looks like whatever is lazy in Rust is an iterator. How am I supposed to do more or less the same in Rust?
I'd like to preserve the structure of my recursive function with dirs and files as vectors of paths (they are optionally sorted and filtered).
Getting dirs and files is already implemented to my liking:
fn folders(dir: &Path, folder: bool) -> Result<Vec<PathBuf>, io::Error> {
Ok(fs::read_dir(dir)?
.into_iter()
.filter(|r| r.is_ok())
.map(|r| r.unwrap().path())
.filter(|r| if folder { r.is_dir() } else { !r.is_dir() })
.collect())
}
fn list_dir_groom(dir: &Path) -> (Vec<PathBuf>, Vec<PathBuf>) {
let mut dirs = folders(dir, true).unwrap();
let mut files = folders(dir, false).unwrap();
if flag("x") {
dirs.sort_unstable();
files.sort_unstable();
} else {
sort_path_slice(&mut dirs);
sort_path_slice(&mut files);
}
if flag("r") {
dirs.reverse();
files.reverse();
}
(dirs, files)
}
Vec<PathBuf> can be iterated as-is, and there is a standard flat_map method. It should be possible to implement it Elixir-style; I just can't figure it out yet.
This is what I already have, and it really works (called as traverse_flat_dst(&SRC, [].to_vec());):
fn traverse_flat_dst(src_dir: &PathBuf, dst_step: Vec<PathBuf>) {
let (dirs, files) = list_dir_groom(src_dir);
for d in dirs.iter() {
let mut step = dst_step.clone();
step.push(PathBuf::from(d.file_name().unwrap()));
println!("d: {:?}; step: {:?}", d, step);
traverse_flat_dst(d, step);
}
for f in files.iter() {
println!("f: {:?}", f);
}
}
What I want (not yet working!):
fn traverse_flat_dst_iter(src_dir: &PathBuf, dst_step: Vec<PathBuf>) {
let (dirs, files) = list_dir_groom(src_dir);
let traverse = |d| {
let mut step = dst_step.clone();
step.push(PathBuf::from(d.file_name().unwrap()));
traverse_flat_dst_iter(d, step);
};
// This is something that I just wish to be true!
flat_map(dirs, traverse) + map(files)
}
I want this function to deliver one long flat iterator of files, in the spirit of the Elixir solution. I just can't yet cope with the necessary return types and other syntax. I really hope to be clear enough this time.
What I managed to compile and run (meaningless, but the signature is what I actually want):
fn traverse_flat_dst_iter(
src_dir: &PathBuf,
dst_step: Vec<PathBuf>,
) -> impl Iterator<Item = (PathBuf, PathBuf)> {
let (dirs, files) = list_dir_groom(src_dir);
let _traverse = |d: &PathBuf| {
let mut step = dst_step.clone();
step.push(PathBuf::from(d.file_name().unwrap()));
traverse_flat_dst_iter(d, step)
};
files.into_iter().map(|f| (f, PathBuf::new()))
}
What I'm still lacking:
fn traverse_flat_dst_iter(
src_dir: &PathBuf,
dst_step: Vec<PathBuf>,
) -> impl Iterator<Item = (PathBuf, PathBuf)> {
let (dirs, files) = list_dir_groom(src_dir);
let traverse = |d: &PathBuf| {
let mut step = dst_step.clone();
step.push(PathBuf::from(d.file_name().unwrap()));
traverse_flat_dst_iter(d, step)
};
// Here is a combination amounting to an iterator,
// which delivers a (PathBuf, PathBuf) tuple on each step.
// Flat mapping with traverse, of course (see Elixir solution).
// Iterator must be as long as the number of files in the tree.
// The lines below look very close, but every possible type is mismatched :(
dirs.into_iter().flat_map(traverse)
.chain(files.into_iter().map(|f| (f, PathBuf::new())))
}
There are two approaches:
The first one is to use an existing crate, like walkdir. The benefit is that it's well tested and offers many options.
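A rough sketch of what that could look like with walkdir (the sorting and filtering here are just examples):
// Cargo.toml: walkdir = "2"
use walkdir::WalkDir;

fn main() {
    // Lazily walks the tree depth-first, sorting siblings by file name
    // and keeping only regular files.
    let files = WalkDir::new(".")
        .sort_by(|a, b| a.file_name().cmp(b.file_name()))
        .into_iter()
        .filter_map(|entry| entry.ok())
        .filter(|entry| entry.file_type().is_file())
        .map(|entry| entry.path().to_path_buf());

    for path in files {
        println!("{}", path.display());
    }
}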
The second one is to write your own implementation of Iterator. Here's an example, and maybe the basis for your own:
use std::fs::{self, ReadDir};
use std::path::PathBuf;
struct FileIterator {
dirs: Vec<PathBuf>, // the dirs waiting to be read
files: Option<ReadDir>, // non recursive iterator over the currently read dir
}
impl From<&str> for FileIterator {
fn from(path: &str) -> Self {
FileIterator {
dirs: vec![PathBuf::from(path)],
files: None,
}
}
}
impl Iterator for FileIterator {
type Item = PathBuf;
fn next(&mut self) -> Option<PathBuf> {
loop {
while let Some(read_dir) = &mut self.files {
match read_dir.next() {
Some(Ok(entry)) => {
let path = entry.path();
if let Ok(md) = entry.metadata() {
if md.is_dir() {
self.dirs.push(path.clone());
continue;
}
}
return Some(path);
}
None => { // we consumed this directory
self.files = None;
break;
}
_ => { }
}
}
while let Some(dir) = self.dirs.pop() {
let read_dir = fs::read_dir(&dir);
if let Ok(files) = read_dir {
self.files = Some(files);
return Some(dir);
}
}
break; // no more files, no more dirs
}
return None;
}
}
playground
The advantage of writing your own iterator is that you'll tune it for your precise needs (sorting, filtering, error handling, etc.). But you'll have to deal with your own bugs.
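For completeness, driving the iterator above could look like this (the starting path is just an example):
fn main() {
    for path in FileIterator::from(".") {
        println!("{}", path.display());
    }
}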
This is the exact solution I sought. It's not my own achievement; see here. Comments are welcome.
fn traverse_flat_dst_iter(
src_dir: &PathBuf,
dst_step: Vec<PathBuf>,
) -> impl Iterator<Item = (PathBuf, PathBuf)> {
let (dirs, files) = list_dir_groom(src_dir);
let traverse = move |d: PathBuf| -> Box<dyn Iterator<Item = (PathBuf, PathBuf)>> {
let mut step = dst_step.clone();
step.push(PathBuf::from(d.file_name().unwrap()));
Box::new(traverse_flat_dst_iter(&d, step))
};
dirs.into_iter()
.flat_map(traverse)
.chain(files.into_iter().map(|f| (f, PathBuf::new())))
}
Another, more sophisticated take. One has to box things, clone parameters to be shared between lambdas, etc., to satisfy the compiler. Yet it works. Hopefully, one can get the hang of it.
fn traverse_dir(
src_dir: &PathBuf,
dst_step: Vec<PathBuf>,
) -> Box<dyn Iterator<Item = (PathBuf, Vec<PathBuf>)>> {
let (dirs, files) = groom(src_dir);
let destination_step = dst_step.clone(); // A clone for handle.
let traverse = move |d: PathBuf| {
let mut step = dst_step.clone();
step.push(PathBuf::from(d.file_name().unwrap()));
traverse_dir(&d, step)
};
let handle = move |f: PathBuf| (f, destination_step.clone());
if flag("r") {
// Chaining backwards.
Box::new(
files
.into_iter()
.map(handle)
.chain(dirs.into_iter().flat_map(traverse)),
)
} else {
Box::new(
dirs.into_iter()
.flat_map(traverse)
.chain(files.into_iter().map(handle)),
)
}
}
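It can then be driven like any other iterator, for example (assuming src is some PathBuf):
for (file, step) in traverse_dir(&src, Vec::new()) {
    println!("{:?} -> {:?}", file, step);
}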
I have a function that takes an argument of type u16. Is there an elegant way to define a custom data type that behaves exactly like a u16 but only has values between 0 and 100?
I originally thought this required dependent types, which Rust does not have. As pointed out in the comments, it doesn't actually require dependent types, but Rust still doesn't have the support needed.
As a workaround, you could create a newtype that you verify yourself:
#[derive(Debug)]
struct Age(u16);
impl Age {
fn new(age: u16) -> Option<Age> {
if age <= 100 {
Some(Age(age))
} else {
None
}
}
}
fn main() {
let age1 = Age::new(30);
let age2 = Age::new(500);
println!("{:?}, {:?}", age1, age2);
assert_eq!(
std::mem::size_of::<Age>(),
std::mem::size_of::<u16>()
);
}
Of course, it doesn't behave exactly like a u16, but you don't want it to, either! For example, a u16 can go beyond 100... You'd have to reason out if it makes sense to add/subtract/multiply/divide etc your new type as well.
For maximum safeguarding, you should move your type and any associated functions into a module. This leverages Rust's visibility rules to prevent people from accidentally accessing the value inside the newtype and invalidating the constraints.
You may also want to implement TryFrom (from u16 to your type) or From (from your type to u16) to better integrate with generic code.
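For instance, those conversions for the Age type above might look like this:
use std::convert::TryFrom;

impl TryFrom<u16> for Age {
    type Error = &'static str;

    fn try_from(value: u16) -> Result<Self, Self::Error> {
        Age::new(value).ok_or("age must be at most 100")
    }
}

impl From<Age> for u16 {
    fn from(age: Age) -> u16 {
        age.0
    }
}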
An important thing to note is that this newtype takes the same amount of space as a u16 - the wrapper type is effectively erased when the code is compiled. The type checker makes sure everything meshes before that point.
Unfortunately, there is no such thing in the standard library.
However, you can do it yourself in an optimized manner with const generics, which were nightly-only at the time of writing and were stabilized in Rust 1.51. Example:
// 1.51.0-nightly (2020-12-30)
pub struct BoundedI32<const LOW: i32, const HIGH: i32>(i32);
impl<const LOW: i32, const HIGH: i32> BoundedI32<{ LOW }, { HIGH }> {
pub const LOW: i32 = LOW;
pub const HIGH: i32 = HIGH;
pub fn new(n: i32) -> Self {
BoundedI32(n.min(Self::HIGH).max(Self::LOW))
}
pub fn fallible_new(n: i32) -> Result<Self, &'static str> {
match n {
n if n < Self::LOW => Err("Value too low"),
n if n > Self::HIGH => Err("Value too high"),
n => Ok(BoundedI32(n)),
}
}
pub fn set(&mut self, n: i32) {
*self = BoundedI32(n.min(Self::HIGH).max(Self::LOW))
}
}
impl<const LOW: i32, const HIGH: i32> std::ops::Deref for BoundedI32<{ LOW }, { HIGH }> {
type Target = i32;
fn deref(&self) -> &Self::Target {
&self.0
}
}
fn main() {
let dice = BoundedI32::<1, 6>::fallible_new(0);
assert!(dice.is_err());
let mut dice = BoundedI32::<1, 6>::new(0);
assert_eq!(*dice, 1);
dice.set(123);
assert_eq!(*dice, 6);
}
And then you can implement the maths, etc.
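For example, a clamping Add implementation for the type above could look like this:
impl<const LOW: i32, const HIGH: i32> std::ops::Add for BoundedI32<{ LOW }, { HIGH }> {
    type Output = Self;

    // Reuses `new`, so the result is clamped back into [LOW, HIGH].
    fn add(self, rhs: Self) -> Self {
        Self::new(self.0 + rhs.0)
    }
}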
If you want to choose the bounds at runtime, you don't need this feature; you just need to do something like this:
pub struct BoundedI32 {
n: i32,
low: i32,
high: i32,
}
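A constructor in the same clamping spirit might look like this:
impl BoundedI32 {
    pub fn new(n: i32, low: i32, high: i32) -> Self {
        assert!(low <= high);
        BoundedI32 {
            n: n.min(high).max(low),
            low,
            high,
        }
    }

    pub fn get(&self) -> i32 {
        self.n
    }
}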
You can also use a crate like bounded-integer, which lets you generate a bounded integer on the fly with a macro.
With the nightly feature generic_const_exprs, it is possible to verify this at compile time:
#![feature(generic_const_exprs)]
struct If<const COND: bool>;
trait True {}
impl True for If<true> {}
const fn in_bounds(n: usize, low: usize, high: usize) -> bool {
n > low && n < high
}
struct BoundedInteger<const LOW: usize, const HIGH: usize>(usize);
impl<const LOW: usize, const HIGH: usize> BoundedInteger<LOW, HIGH>
where
If<{ LOW < HIGH }>: True,
{
fn new<const N: usize>() -> Self
where
If<{ in_bounds(N, LOW, HIGH) }>: True,
{
Self(N)
}
}
The error messages aren't the best, but it works!
fn main() {
let a = BoundedInteger::<1, 10>::new::<5>();
let b = BoundedInteger::<10, 1>::new::<5>(); // ERROR: doesn't satisfy `If<{ LOW < HIGH }>: True`
let c = BoundedInteger::<2, 5>::new::<6>(); // ERROR: expected `false`, found `true`
}
Not exactly, to my knowledge. But you can use a trait to get close. Here is an example where tonnage is an unsigned 8-bit integer that is expected to be 20-100 and a multiple of 5:
pub trait Validator{
fn isvalid(&self) -> bool;
}
pub struct TotalRobotTonnage{
pub tonnage: u8,
}
impl Validator for TotalRobotTonnage {
    // valid when in the range 20-100 and a multiple of 5
    fn isvalid(&self) -> bool {
        self.tonnage >= 20 && self.tonnage <= 100 && self.tonnage % 5 == 0
    }
}
fn main() {
let validtonnage = TotalRobotTonnage{tonnage: 100};
let invalidtonnage_outofrange = TotalRobotTonnage{tonnage: 10};
let invalidtonnage_notmultipleof5 = TotalRobotTonnage{tonnage: 21};
println!("value {} [{}] value {} [{}] value {} [{}]",
validtonnage.tonnage,
validtonnage.isvalid(),
invalidtonnage_outofrange.tonnage,
invalidtonnage_outofrange.isvalid(),
invalidtonnage_notmultipleof5.tonnage,
invalidtonnage_notmultipleof5.isvalid()
);
}
For example:
struct Foo<'a> { bar: &'a str }
fn main() {
let foo_instance = Foo { bar: "bar" };
let some_vector: Vec<&Foo> = vec![&foo_instance];
assert!(*some_vector[0] == foo_instance);
}
I want to check if foo_instance references the same instance as *some_vector[0], but I can't do this ...
I don't want to know if the two instances are equal; I want to check if the variables point to the same instance in memory.
Is it possible to do that?
There is the function ptr::eq:
use std::ptr;
struct Foo<'a> {
bar: &'a str,
}
fn main() {
let foo_instance = Foo { bar: "bar" };
let some_vector: Vec<&Foo> = vec![&foo_instance];
assert!(ptr::eq(some_vector[0], &foo_instance));
}
Before this was stabilized in Rust 1.17.0, you could perform a cast to *const T:
assert!(some_vector[0] as *const Foo == &foo_instance as *const Foo);
It will check whether the references point to the same place in memory.
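For contrast, a quick sketch showing that two equal but separate instances are not pointer-equal:
use std::ptr;

struct Foo<'a> {
    bar: &'a str,
}

fn main() {
    let first = Foo { bar: "bar" };
    let second = Foo { bar: "bar" }; // same contents, different instance
    let some_vector: Vec<&Foo> = vec![&first];

    assert!(ptr::eq(some_vector[0], &first));   // same instance
    assert!(!ptr::eq(some_vector[0], &second)); // different instance
}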
I need to convert &[u8] to a hex representation. For example [ A9, 45, FF, 00 ... ].
The trait std::fmt::UpperHex is not implemented for slices (so I can't use std::fmt::format). Rust has the serialize::hex::ToHex trait, which converts &[u8] to a hex String, but I need a representation with separate bytes.
I could implement the UpperHex trait for &[u8] myself, but I'm not sure how canonical that would be. What is the most canonical way to do this?
Rust 1.26.0 and up
The :x? "debug with hexadecimal integers" formatter can be used:
let data = b"hello";
// lower case
println!("{:x?}", data);
// upper case
println!("{:X?}", data);
let data = [0x0, 0x1, 0xe, 0xf, 0xff];
// print the leading zero
println!("{:02X?}", data);
// It can be combined with the pretty modifier as well
println!("{:#04X?}", data);
Output:
[68, 65, 6c, 6c, 6f]
[68, 65, 6C, 6C, 6F]
[00, 01, 0E, 0F, FF]
[
0x00,
0x01,
0x0E,
0x0F,
0xFF,
]
If you need more control or need to support older versions of Rust, keep reading.
Rust 1.0 and up
use std::fmt::Write;
fn main() {
let mut s = String::new();
for &byte in "Hello".as_bytes() {
write!(&mut s, "{:X} ", byte).expect("Unable to write");
}
println!("{}", s);
}
This can be fancied up by implementing one of the formatting traits (fmt::Debug, fmt::Display, fmt::LowerHex, fmt::UpperHex, etc.) on a wrapper struct and having a little constructor:
use std::fmt;
struct HexSlice<'a>(&'a [u8]);
impl<'a> HexSlice<'a> {
fn new<T>(data: &'a T) -> HexSlice<'a>
where
T: ?Sized + AsRef<[u8]> + 'a,
{
HexSlice(data.as_ref())
}
}
// You can choose to implement multiple traits, like Lower and UpperHex
impl fmt::Display for HexSlice<'_> {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
for byte in self.0 {
// Decide if you want to pad the value or have spaces in between, etc.
write!(f, "{:X} ", byte)?;
}
Ok(())
}
}
fn main() {
// To get a `String`
let s = format!("{}", HexSlice::new("Hello"));
// Or print it directly
println!("{}", HexSlice::new("world"));
// Works with
HexSlice::new("Hello"); // string slices (&str)
HexSlice::new(b"Hello"); // byte slices (&[u8])
HexSlice::new(&"World".to_string()); // References to String
HexSlice::new(&vec![0x00, 0x01]); // References to Vec<u8>
}
You can be even fancier and create an extension trait:
trait HexDisplayExt {
fn hex_display(&self) -> HexSlice<'_>;
}
impl<T> HexDisplayExt for T
where
T: ?Sized + AsRef<[u8]>,
{
fn hex_display(&self) -> HexSlice<'_> {
HexSlice::new(self)
}
}
fn main() {
println!("{}", "world".hex_display());
}
Use hex::encode from the hex crate:
let a: [u8;4] = [1, 3, 3, 7];
assert_eq!(hex::encode(&a), "01030307");
[dependencies]
hex = "0.4"
Since the accepted answer doesn't work on Rust 1.0 stable, here's my attempt. It should be allocation-free and thus reasonably fast. This is basically a formatter for [u8], but because of the coherence rules, we must wrap [u8] in a self-defined type ByteBuf(&[u8]) to use it:
struct ByteBuf<'a>(&'a [u8]);
impl<'a> std::fmt::LowerHex for ByteBuf<'a> {
fn fmt(&self, fmtr: &mut std::fmt::Formatter) -> Result<(), std::fmt::Error> {
for byte in self.0 {
try!( fmtr.write_fmt(format_args!("{:02x}", byte)));
}
Ok(())
}
}
Usage:
let buff = [0_u8; 24];
println!("{:x}", ByteBuf(&buff));
There's a crate for this: hex-slice.
For example:
extern crate hex_slice;
use hex_slice::AsHex;
fn main() {
let foo = vec![0u32, 1, 2 ,3];
println!("{:02x}", foo.as_hex());
}
I'm doing it this way:
let bytes: Vec<u8> = "привет".as_bytes().to_vec();
let hex: String = bytes
    .iter()
    .map(|b| format!("{:02x}", b))
    .collect::<Vec<String>>()
    .join(" ");