How to replace a portion of a vector using Rust?

What is the best way to replace a specific portion of a vector with a new vector?
As of now, I am replacing the elements one at a time with hardcoded indices. What is the most effective way to achieve this?
fn main() {
    let mut v = vec![1, 2, 3, 4, 5, 6, 7, 8, 9];
    let u = vec![0, 0, 0, 0];
    v[2] = u[0];
    v[3] = u[1];
    v[4] = u[2];
    v[5] = u[3];
    println!("v = {:?}", v);
}
Permalink to the playground
Is there any function to replace the vector with given indices?

For Copy types:
v[2..][..u.len()].copy_from_slice(&u);
Playground.
For non-Copy types:
v.splice(2..2 + u.len(), u);
Playground.
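Putting both variants into one runnable sketch, reusing the vectors from the question (the asserts just show the expected result):
fn main() {
    // Copy types: overwrite v[2..6] in place with the contents of u.
    let mut v = vec![1, 2, 3, 4, 5, 6, 7, 8, 9];
    let u = vec![0, 0, 0, 0];
    v[2..][..u.len()].copy_from_slice(&u);
    assert_eq!(v, [1, 2, 0, 0, 0, 0, 7, 8, 9]);

    // The splice form (also works for non-Copy types); this consumes u.
    let mut v = vec![1, 2, 3, 4, 5, 6, 7, 8, 9];
    let u = vec![0, 0, 0, 0];
    v.splice(2..2 + u.len(), u);
    assert_eq!(v, [1, 2, 0, 0, 0, 0, 7, 8, 9]);
}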

Another way:
let offset: usize = 2;
u.iter().enumerate().for_each(|(index, &val)| {
    v[index + offset] = val;
});
Playground

Related

How to bulk insert into a vector in rust? [duplicate]

Is there any straightforward way to insert or replace multiple elements from &[T] and/or Vec<T> in the middle or at the beginning of a Vec in linear time?
I could only find std::vec::Vec::insert, but that's only for inserting a single element in O(n) time, so I obviously cannot call that in a loop.
I could do a split_off at that index, extend the new elements into the left half of the split, and then extend the second half into the first, but is there a better way?
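For reference, a minimal sketch of the split_off-then-extend approach described above (the helper name insert_slice_at is just for illustration, and I'm not claiming anything about its performance relative to the answers below):
fn insert_slice_at<T: Clone>(v: &mut Vec<T>, index: usize, s: &[T]) {
    let tail = v.split_off(index); // the left half stays in v
    v.extend_from_slice(s);        // append the new elements
    v.extend(tail);                // move the old tail back
}

fn main() {
    let mut v = vec![1, 5];
    insert_slice_at(&mut v, 1, &[2, 3, 4]);
    assert_eq!(v, [1, 2, 3, 4, 5]);
}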
As of Rust 1.21.0, Vec::splice is available and allows inserting at any point, including fully prepending:
let mut vec = vec![1, 5];
let slice = &[2, 3, 4];
vec.splice(1..1, slice.iter().cloned());
println!("{:?}", vec); // [1, 2, 3, 4, 5]
The docs state:
Note 4: This is optimal if:
The tail (elements in the vector after range) is empty
or replace_with yields fewer elements than range’s length
or the lower bound of its size_hint() is exact.
In this case, the lower bound of the slice's iterator should be exact, so it should perform one memory move.
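As a quick sanity check of that claim, the size hint of a cloned slice iterator is indeed exact (a small sketch, not part of the original answer):
fn main() {
    let slice = &[2, 3, 4];
    let iter = slice.iter().cloned();
    // Slice iterators know their exact length, so splice can do a single move.
    assert_eq!(iter.size_hint(), (3, Some(3)));
}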
splice is a bit more powerful in that it allows you to remove a range of values (the first argument), insert new values (the second argument), and optionally get the old values (the result of the call).
Replacing a set of items
let mut vec = vec![0, 1, 5];
let slice = &[2, 3, 4];
vec.splice(..2, slice.iter().cloned());
println!("{:?}", vec); // [2, 3, 4, 5]
Getting the previous values
let mut vec = vec![0, 1, 2, 3, 4];
let slice = &[9, 8, 7];
let old: Vec<_> = vec.splice(3.., slice.iter().cloned()).collect();
println!("{:?}", vec); // [0, 1, 2, 9, 8, 7]
println!("{:?}", old); // [3, 4]
Okay, there is no appropriate method in the Vec interface (as far as I can see). But we can always implement the same thing ourselves.
memmove
When T is Copy, probably the most obvious way is to move the memory, like this:
fn push_all_at<T>(v: &mut Vec<T>, offset: usize, s: &[T])
where
    T: Copy,
{
    match (v.len(), s.len()) {
        (_, 0) => (),
        (current_len, _) => {
            assert!(offset <= current_len);
            v.reserve_exact(s.len());
            unsafe {
                v.set_len(current_len + s.len());
                let to_move = current_len - offset;
                let src = v.as_mut_ptr().add(offset);
                if to_move > 0 {
                    // shift the tail right to make room for the new elements
                    let dst = src.add(s.len());
                    std::ptr::copy(src, dst, to_move);
                }
                // copy the new elements into the gap
                std::ptr::copy_nonoverlapping(s.as_ptr(), src, s.len());
            }
        }
    }
}
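A small usage sketch for the function above (plain integers, purely for illustration):
fn main() {
    let mut v = vec![1, 2, 3, 4, 5];
    // insert [10, 11] starting at index 2, shifting the tail right
    push_all_at(&mut v, 2, &[10, 11]);
    assert_eq!(v, [1, 2, 10, 11, 3, 4, 5]);
}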
shuffle
If T is not Copy but implements Clone, we can append the given slice to the end of the Vec and then move it into the required position using swaps, in linear time:
use std::iter::repeat;

fn push_all_at<T>(v: &mut Vec<T>, mut offset: usize, s: &[T])
where
    T: Clone + Default,
{
    match (v.len(), s.len()) {
        (_, 0) => (),
        (0, _) => v.extend_from_slice(s),
        (_, _) => {
            assert!(offset <= v.len());
            // pad so that the tail length becomes a multiple of s.len()
            let pad = s.len() - ((v.len() - offset) % s.len());
            v.extend(repeat(Default::default()).take(pad));
            v.extend_from_slice(s);
            let total = v.len();
            // repeatedly swap the trailing block into place at `offset`
            while total - offset >= s.len() {
                for i in 0..s.len() {
                    v.swap(offset + i, total - s.len() + i);
                }
                offset += s.len();
            }
            v.truncate(total - pad);
        }
    }
}
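And a usage sketch for this Clone-based version (String chosen only because it is not Copy):
fn main() {
    let mut v: Vec<String> = vec!["a".into(), "b".into(), "d".into()];
    push_all_at(&mut v, 2, &["c".to_string()]);
    assert_eq!(v, ["a", "b", "c", "d"]);
}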
iterators concat
Maybe the best choice is not to modify the Vec at all. For example, if you are going to access the result via an iterator, we can just build a chain of iterators over our chunks:
let v: &[usize] = &[0, 1, 2];
let s: &[usize] = &[3, 4, 5, 6];
let offset = 2;
let chain = v.iter().take(offset).chain(s.iter()).chain(v.iter().skip(offset));
let result: Vec<_> = chain.collect();
println!("Result: {:?}", result);
I was trying to prepend to a vector in Rust and found this closed question linked here. (This question covers prepending, inserting, and efficiency; my answer would probably fit better on that other, more precise question, because I can't attest to the efficiency.) Still, the following code helped me prepend (and do the opposite). I'm sure the other two answers are more efficient, but the way I learn, I like answers that can be cut and pasted, with examples that demonstrate an application of the answer.
pub trait Unshift<T> { fn unshift(&mut self, s: &[T]) -> (); }
pub trait UnshiftVec<T> { fn unshift_vec(&mut self, s: Vec<T>) -> (); }
pub trait UnshiftMemoryHog<T> { fn unshift_memory_hog(&mut self, s: Vec<T>) -> (); }
pub trait Shift<T> { fn shift(&mut self) -> (); }
pub trait ShiftN<T> { fn shift_n(&mut self, s: usize) -> (); }
impl<T: std::clone::Clone> ShiftN<T> for Vec<T> {
fn shift_n(&mut self, s: usize) -> ()
// where
// T: std::clone::Clone,
{
self.drain(0..s);
}
}
impl<T: std::clone::Clone> Shift<T> for Vec<T> {
fn shift(&mut self) -> ()
// where
// T: std::clone::Clone,
{
self.drain(0..1);
}
}
impl<T: std::clone::Clone> Unshift<T> for Vec<T> {
fn unshift(&mut self, s: &[T]) -> ()
// where
// T: std::clone::Clone,
{
self.splice(0..0, s.to_vec());
}
}
impl<T: std::clone::Clone> UnshiftVec<T> for Vec<T> {
fn unshift_vec(&mut self, s: Vec<T>) -> ()
where
T: std::clone::Clone,
{
self.splice(0..0, s);
}
}
impl<T: std::clone::Clone> UnshiftMemoryHog<T> for Vec<T> {
fn unshift_memory_hog(&mut self, s: Vec<T>) -> ()
where
T: std::clone::Clone,
{
let mut tmp: Vec<_> = s.to_owned();
//let mut tmp: Vec<_> = s.clone(); // this also works for some data types
/*
let local_s: Vec<_> = self.clone(); // explicit clone()
tmp.extend(local_s); // to vec is possible
*/
tmp.extend(self.clone());
*self = tmp;
//*self = (*tmp).to_vec(); // Just because it compiles, doesn't make it right.
}
}
// this works for: v = unshift(v, &vec![8]);
// (If you don't want to impl Unshift for Vec<T>)
#[allow(dead_code)]
fn unshift_fn<T>(v: Vec<T>, s: &[T]) -> Vec<T>
where
T: Clone,
{
// create a mutable vec and fill it
// with a clone of the array that we want
// at the start of the vec.
let mut tmp: Vec<_> = s.to_owned();
// then we add the existing vector to the end
// of the temporary vector.
tmp.extend(v);
// return the tmp vec that is identical
// to unshift-ing the original vec.
tmp
}
/*
N.B. It is sometimes (often?) more memory efficient to reverse
the vector and use push/pop, rather than splice/drain;
Especially if you create your vectors in "stack order" to begin with.
*/
fn main() {
let mut v: Vec<usize> = vec![1, 2, 3];
println!("Before push:\t {:?}", v);
v.push(0);
println!("After push:\t {:?}", v);
v.pop();
println!("popped:\t\t {:?}", v);
v.drain(0..1);
println!("drain(0..1)\t {:?}", v);
/*
// We could use a function
let c = v.clone();
v = unshift_fn(c, &vec![0]);
*/
v.splice(0..0, vec![0]);
println!("splice(0..0, vec![0]) {:?}", v);
v.shift_n(1);
println!("shift\t\t {:?}", v);
v.unshift_memory_hog(vec![8, 16, 31, 1]);
println!("MEMORY guzzler unshift {:?}", v);
//v.drain(0..3);
v.drain(0..=2);
println!("back to the start: {:?}", v);
v.unshift_vec(vec![0]);
println!("zerothed with unshift: {:?}", v);
let mut w = vec![4, 5, 6];
/*
let prepend_this = &[1, 2, 3];
w.unshift_vec(prepend_this.to_vec());
*/
w.unshift(&[1, 2, 3]);
assert_eq!(&w, &[1, 2, 3, 4, 5, 6]);
println!("{:?} == {:?}", &w, &[1, 2, 3, 4, 5, 6]);
}
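A minimal sketch of the reverse-and-push idea from the comment above, assuming you control how the vector is built in the first place:
fn main() {
    // Keep the data in reverse ("stack") order, so prepending is a cheap push.
    let mut rev: Vec<i32> = vec![3, 2, 1]; // logically [1, 2, 3]
    rev.push(0);                           // logically prepends 0
    let logical: Vec<i32> = rev.iter().rev().copied().collect();
    assert_eq!(logical, [0, 1, 2, 3]);
}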

flattening an array via the AST [duplicate]

I have a JavaScript array like:
[["$6"], ["$12"], ["$25"], ["$25"], ["$18"], ["$22"], ["$10"]]
How would I go about merging the separate inner arrays into one like:
["$6", "$12", "$25", ...]
ES2019
ES2019 introduced the Array.prototype.flat() method which you could use to flatten the arrays. It is compatible with most environments, although it is only available in Node.js starting with version 11, and not at all in Internet Explorer.
const arrays = [
["$6"],
["$12"],
["$25"],
["$25"],
["$18"],
["$22"],
["$10"]
];
const merge3 = arrays.flat(1); //The depth level specifying how deep a nested array structure should be flattened. Defaults to 1.
console.log(merge3);
Older browsers
For older browsers, you can use Array.prototype.concat to merge arrays:
var arrays = [
["$6"],
["$12"],
["$25"],
["$25"],
["$18"],
["$22"],
["$10"]
];
var merged = [].concat.apply([], arrays);
console.log(merged);
Using the apply method of concat will just take the second parameter as an array, so the last line is identical to this:
var merged = [].concat(["$6"], ["$12"], ["$25"], ["$25"], ["$18"], ["$22"], ["$10"]);
Here's a short function that uses some of the newer JavaScript array methods to flatten an n-dimensional array.
function flatten(arr) {
return arr.reduce(function (flat, toFlatten) {
return flat.concat(Array.isArray(toFlatten) ? flatten(toFlatten) : toFlatten);
}, []);
}
Usage:
flatten([[1, 2, 3], [4, 5]]); // [1, 2, 3, 4, 5]
flatten([[[1, [1.1]], 2, 3], [4, 5]]); // [1, 1.1, 2, 3, 4, 5]
There is a confusingly hidden method, which constructs a new array without mutating the original one:
var oldArray = [[1],[2,3],[4]];
var newArray = Array.prototype.concat.apply([], oldArray);
console.log(newArray); // [ 1, 2, 3, 4 ]
This can best be done with JavaScript's reduce function.
var arrays = [["$6"], ["$12"], ["$25"], ["$25"], ["$18"], ["$22"], ["$10"], ["$0"], ["$15"],["$3"], ["$75"], ["$5"], ["$100"], ["$7"], ["$3"], ["$75"], ["$5"]];
arrays = arrays.reduce(function(a, b){
return a.concat(b);
}, []);
Or, with ES2015:
arrays = arrays.reduce((a, b) => a.concat(b), []);
js-fiddle
Mozilla docs
There's a new native method called flat to do this exactly.
(As of late 2019, flat is now published in the ECMA 2019 standard, and core-js#3 (babel's library) includes it in their polyfill library)
const arr1 = [1, 2, [3, 4]];
arr1.flat();
// [1, 2, 3, 4]
const arr2 = [1, 2, [3, 4, [5, 6]]];
arr2.flat();
// [1, 2, 3, 4, [5, 6]]
// Flatten 2 levels deep
const arr3 = [2, 2, 5, [5, [5, [6]], 7]];
arr3.flat(2);
// [2, 2, 5, 5, 5, [6], 7];
// Flatten all levels
const arr4 = [2, 2, 5, [5, [5, [6]], 7]];
arr4.flat(Infinity);
// [2, 2, 5, 5, 5, 6, 7];
Most of the answers here don't work on huge (e.g. 200 000 elements) arrays, and even if they do, they're slow.
Here is the fastest solution, which works also on arrays with multiple levels of nesting:
const flatten = function(arr, result = []) {
for (let i = 0, length = arr.length; i < length; i++) {
const value = arr[i];
if (Array.isArray(value)) {
flatten(value, result);
} else {
result.push(value);
}
}
return result;
};
Examples
Huge arrays
flatten(Array(200000).fill([1]));
It handles huge arrays just fine. On my machine this code takes about 14 ms to execute.
Nested arrays
flatten(Array(2).fill(Array(2).fill(Array(2).fill([1]))));
It works with nested arrays. This code produces [1, 1, 1, 1, 1, 1, 1, 1].
Arrays with different levels of nesting
flatten([1, [1], [[1]]]);
It doesn't have any problems with flattening arrays like this one.
Update: it turned out that this solution doesn't work with large arrays. If you're looking for a better, faster solution, check out this answer.
function flatten(arr) {
return [].concat(...arr)
}
It simply expands arr and passes its elements as arguments to concat(), which merges all the arrays into one. It's equivalent to [].concat.apply([], arr).
You can also try this for deep flattening:
function deepFlatten(arr) {
return flatten( // return shallowly flattened array
arr.map(x=> // with each x in array
Array.isArray(x) // is x an array?
? deepFlatten(x) // if yes, return deeply flattened x
: x // if no, return just x
)
)
}
See demo on JSBin.
References for ECMAScript 6 elements used in this answer:
Spread operator
Arrow functions
Side note: methods like find() and arrow functions are not supported by all browsers, but it doesn't mean that you can't use these features right now. Just use Babel — it transforms ES6 code into ES5.
You can use Underscore:
var x = [[1], [2], [3, 4]];
_.flatten(x); // => [1, 2, 3, 4]
Generic procedures mean we don't have to rewrite complexity each time we need to utilize a specific behaviour.
concatMap (or flatMap) is exactly what we need in this situation.
// concat :: ([a],[a]) -> [a]
const concat = (xs,ys) =>
xs.concat (ys)
// concatMap :: (a -> [b]) -> [a] -> [b]
const concatMap = f => xs =>
xs.map(f).reduce(concat, [])
// id :: a -> a
const id = x =>
x
// flatten :: [[a]] -> [a]
const flatten =
concatMap (id)
// your sample data
const data =
[["$6"], ["$12"], ["$25"], ["$25"], ["$18"], ["$22"], ["$10"]]
console.log (flatten (data))
foresight
And yes, you guessed it correctly: it only flattens one level, which is exactly how it should work.
Imagine some data set like this
// Player :: (String, Number) -> Player
const Player = (name,number) =>
[ name, number ]
// team :: ( . Player) -> Team
const Team = (...players) =>
players
// Game :: (Team, Team) -> Game
const Game = (teamA, teamB) =>
[ teamA, teamB ]
// sample data
const teamA =
Team (Player ('bob', 5), Player ('alice', 6))
const teamB =
Team (Player ('ricky', 4), Player ('julian', 2))
const game =
Game (teamA, teamB)
console.log (game)
// [ [ [ 'bob', 5 ], [ 'alice', 6 ] ],
// [ [ 'ricky', 4 ], [ 'julian', 2 ] ] ]
Ok, now say we want to print a roster that shows all the players that will be participating in game …
const gamePlayers = game =>
flatten (game)
gamePlayers (game)
// => [ [ 'bob', 5 ], [ 'alice', 6 ], [ 'ricky', 4 ], [ 'julian', 2 ] ]
If our flatten procedure flattened nested arrays too, we'd end up with this garbage result …
const gamePlayers = game =>
badGenericFlatten(game)
gamePlayers (game)
// => [ 'bob', 5, 'alice', 6, 'ricky', 4, 'julian', 2 ]
rollin' deep, baby
That's not to say sometimes you don't want to flatten nested arrays, too – only that shouldn't be the default behaviour.
We can make a deepFlatten procedure with ease …
// concat :: ([a],[a]) -> [a]
const concat = (xs,ys) =>
xs.concat (ys)
// concatMap :: (a -> [b]) -> [a] -> [b]
const concatMap = f => xs =>
xs.map(f).reduce(concat, [])
// id :: a -> a
const id = x =>
x
// flatten :: [[a]] -> [a]
const flatten =
concatMap (id)
// deepFlatten :: [[a]] -> [a]
const deepFlatten =
concatMap (x =>
Array.isArray (x) ? deepFlatten (x) : x)
// your sample data
const data =
[0, [1, [2, [3, [4, 5], 6]]], [7, [8]], 9]
console.log (flatten (data))
// [ 0, 1, [ 2, [ 3, [ 4, 5 ], 6 ] ], 7, [ 8 ], 9 ]
console.log (deepFlatten (data))
// [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 ]
There. Now you have a tool for each job – one for squashing one level of nesting, flatten, and one for obliterating all nesting deepFlatten.
Maybe you can call it obliterate or nuke if you don't like the name deepFlatten.
Don't iterate twice !
Of course the above implementations are clever and concise, but using a .map followed by a call to .reduce means we're actually doing more iterations than necessary.
Using a trusty combinator I'm calling mapReduce helps keep the iterations to a minimum; it takes a mapping function m :: a -> b and a reducing function r :: (b,a) -> b, and returns a new reducing function. This combinator is at the heart of transducers; if you're interested, I've written other answers about them.
// mapReduce = (a -> b, (b,a) -> b, (b,a) -> b)
const mapReduce = (m,r) =>
(acc,x) => r (acc, m (x))
// concatMap :: (a -> [b]) -> [a] -> [b]
const concatMap = f => xs =>
xs.reduce (mapReduce (f, concat), [])
// concat :: ([a],[a]) -> [a]
const concat = (xs,ys) =>
xs.concat (ys)
// id :: a -> a
const id = x =>
x
// flatten :: [[a]] -> [a]
const flatten =
concatMap (id)
// deepFlatten :: [[a]] -> [a]
const deepFlatten =
concatMap (x =>
Array.isArray (x) ? deepFlatten (x) : x)
// your sample data
const data =
[ [ [ 1, 2 ],
[ 3, 4 ] ],
[ [ 5, 6 ],
[ 7, 8 ] ] ]
console.log (flatten (data))
// [ [ 1, 2 ], [ 3, 4 ], [ 5, 6 ], [ 7, 8 ] ]
console.log (deepFlatten (data))
// [ 1, 2, 3, 4, 5, 6, 7, 8 ]
To flatten an array of single-element arrays, you don't need to import a library; a simple loop is both the simplest and most efficient solution:
for (var i = 0; i < a.length; i++) {
a[i] = a[i][0];
}
To downvoters: please read the question, don't downvote because it doesn't suit your very different problem. This solution is both the fastest and simplest for the asked question.
Another ECMAScript 6 solution in functional style:
Declare a function:
const flatten = arr => arr.reduce(
(a, b) => a.concat(Array.isArray(b) ? flatten(b) : b), []
);
and use it:
flatten( [1, [2,3], [4,[5,[6]]]] ) // -> [1,2,3,4,5,6]
const flatten = arr => arr.reduce(
(a, b) => a.concat(Array.isArray(b) ? flatten(b) : b), []
);
console.log( flatten([1, [2,3], [4,[5],[6,[7,8,9],10],11],[12],13]) )
Consider also the native function Array.prototype.flat() (initially a proposal, now part of ES2019), available in recent releases of modern browsers. Thanks to Константин Ван and Mark Amery for mentioning it in the comments.
The flat function has one parameter, specifying the expected depth of array nesting, which equals 1 by default.
[1, 2, [3, 4]].flat(); // -> [1, 2, 3, 4]
[1, 2, [3, 4, [5, 6]]].flat(); // -> [1, 2, 3, 4, [5, 6]]
[1, 2, [3, 4, [5, 6]]].flat(2); // -> [1, 2, 3, 4, 5, 6]
[1, 2, [3, 4, [5, 6]]].flat(Infinity); // -> [1, 2, 3, 4, 5, 6]
let arr = [1, 2, [3, 4]];
console.log( arr.flat() );
arr = [1, 2, [3, 4, [5, 6]]];
console.log( arr.flat() );
console.log( arr.flat(1) );
console.log( arr.flat(2) );
console.log( arr.flat(Infinity) );
You can also try the new Array.flat() method. It works in the following manner:
let arr = [["$6"], ["$12"], ["$25"], ["$25"], ["$18"], ["$22"], ["$10"]].flat()
console.log(arr);
The flat() method creates a new array with all sub-array elements concatenated into it, recursively up to the specified depth, which defaults to 1 (i.e. one level of arrays inside arrays).
If you want to also flatten out 3 dimensional or even higher dimensional arrays you simply call the flat method multiple times. For example (3 dimensions):
let arr = [1,2,[3,4,[5,6]]].flat().flat().flat();
console.log(arr);
Be careful!
The Array.flat() method is relatively new. Older browsers like IE might not have implemented it. If you want your code to work on all browsers, you might have to transpile your JS to an older version. Check the MDN web docs for current browser compatibility.
A solution for the more general case, when you may have some non-array elements in your array.
function flattenArrayOfArrays(a, r){
if(!r){ r = []}
for(var i=0; i<a.length; i++){
if(a[i].constructor == Array){
flattenArrayOfArrays(a[i], r);
}else{
r.push(a[i]);
}
}
return r;
}
What about using the reduce(callback[, initialValue]) method of JavaScript 1.8?
list.reduce((p,n) => p.concat(n),[]);
Would do the job.
const common = arr.reduce((a, b) => [...a, ...b], [])
You can use Array.flat() with Infinity for any depth of nested array.
var arr = [ [1,2,3,4], [1,2,[1,2,3]], [1,2,3,4,5,[1,2,3,4,[1,2,3,4]]], [[1,2,3,4], [1,2,[1,2,3]], [1,2,3,4,5,[1,2,3,4,[1,2,3,4]]]] ];
let flatten = arr.flat(Infinity)
console.log(flatten)
check here for browser compatibility
Please note: When Function.prototype.apply ([].concat.apply([], arrays)) or the spread operator ([].concat(...arrays)) is used in order to flatten an array, both can cause stack overflows for large arrays, because every argument of a function is stored on the stack.
Here is a stack-safe implementation in functional style that weighs up the most important requirements against one another:
reusability
readability
conciseness
performance
// small, reusable auxiliary functions:
const foldl = f => acc => xs => xs.reduce(uncurry(f), acc); // aka reduce
const uncurry = f => (a, b) => f(a) (b);
const concat = xs => y => xs.concat(y);
// the actual function to flatten an array - a self-explanatory one-line:
const flatten = xs => foldl(concat) ([]) (xs);
// arbitrary array sizes (until the heap blows up :D)
const xs = [[1,2,3],[4,5,6],[7,8,9]];
console.log(flatten(xs));
// Deriving a recursive solution for deeply nested arrays is trivially now
// yet more small, reusable auxiliary functions:
const map = f => xs => xs.map(apply(f));
const apply = f => a => f(a);
const isArray = Array.isArray;
// the derived recursive function:
const flattenr = xs => flatten(map(x => isArray(x) ? flattenr(x) : x) (xs));
const ys = [1,[2,[3,[4,[5],6,],7],8],9];
console.log(flattenr(ys));
As soon as you get used to small arrow functions in curried form, function composition and higher order functions, this code reads like prose. Programming then merely consists of putting together small building blocks that always work as expected, because they don't contain any side effects.
ES6 One Line Flatten
See lodash flatten, underscore flatten (shallow true)
function flatten(arr) {
return arr.reduce((acc, e) => acc.concat(e), []);
}
or
function flatten(arr) {
return [].concat.apply([], arr);
}
Tested with
test('already flatted', () => {
expect(flatten([1, 2, 3, 4, 5])).toEqual([1, 2, 3, 4, 5]);
});
test('flats first level', () => {
expect(flatten([1, [2, [3, [4]], 5]])).toEqual([1, 2, [3, [4]], 5]);
});
ES6 One Line Deep Flatten
See lodash flattenDeep, underscore flatten
function flattenDeep(arr) {
return arr.reduce((acc, e) => Array.isArray(e) ? acc.concat(flattenDeep(e)) : acc.concat(e), []);
}
Tested with
test('already flatted', () => {
expect(flattenDeep([1, 2, 3, 4, 5])).toEqual([1, 2, 3, 4, 5]);
});
test('flats', () => {
expect(flattenDeep([1, [2, [3, [4]], 5]])).toEqual([1, 2, 3, 4, 5]);
});
Using the spread operator:
const input = [["$6"], ["$12"], ["$25"], ["$25"], ["$18"], ["$22"], ["$10"]];
const output = [].concat(...input);
console.log(output); // --> ["$6", "$12", "$25", "$25", "$18", "$22", "$10"]
I recommend a space-efficient generator function:
function* flatten(arr) {
if (!Array.isArray(arr)) yield arr;
else for (let el of arr) yield* flatten(el);
}
// Example:
console.log(...flatten([1,[2,[3,[4]]]])); // 1 2 3 4
If desired, create an array of flattened values as follows:
let flattened = [...flatten([1,[2,[3,[4]]]])]; // [1, 2, 3, 4]
If you only have arrays with 1 string element:
[["$6"], ["$12"], ["$25"], ["$25"]].join(',').split(',');
will do the job. But that specifically matches your code example.
I have done it using recursion and closures
function flatten(arr) {
var temp = [];
function recursiveFlatten(arr) {
for(var i = 0; i < arr.length; i++) {
if(Array.isArray(arr[i])) {
recursiveFlatten(arr[i]);
} else {
temp.push(arr[i]);
}
}
}
recursiveFlatten(arr);
return temp;
}
A Haskellesque approach
function flatArray(arr) {
  if (!arr.length) return []; // base case: nothing left to flatten (also avoids stopping early on falsy values like 0)
  const [x, ...xs] = arr;
  return [...(Array.isArray(x) ? flatArray(x) : [x]), ...flatArray(xs)];
}
var na = [[1,2],[3,[4,5]],[6,7,[[[8],9]]],10];
fa = flatArray(na);
console.log(fa);
ES6 way:
const flatten = arr => arr.reduce((acc, next) => acc.concat(Array.isArray(next) ? flatten(next) : next), [])
const a = [1, [2, [3, [4, [5]]]]]
console.log(flatten(a))
ES5 way for flatten function with ES3 fallback for N-times nested arrays:
var flatten = (function() {
if (!!Array.prototype.reduce && !!Array.isArray) {
return function(array) {
return array.reduce(function(prev, next) {
return prev.concat(Array.isArray(next) ? flatten(next) : next);
}, []);
};
} else {
return function(array) {
var arr = [];
var i = 0;
var len = array.length;
var target;
for (; i < len; i++) {
target = array[i];
arr = arr.concat(
(Object.prototype.toString.call(target) === '[object Array]') ? flatten(target) : target
);
}
return arr;
};
}
}());
var a = [1, [2, [3, [4, [5]]]]];
console.log(flatten(a));
if you use lodash, you can just use its flatten method: https://lodash.com/docs/4.17.14#flatten
The nice thing about lodash is that it also has methods to flatten the arrays:
i) recursively: https://lodash.com/docs/4.17.14#flattenDeep
ii) upto n levels of nesting: https://lodash.com/docs/4.17.14#flattenDepth
For example
const _ = require("lodash");
const pancake = _.flatten(array)
I was goofing with ES6 Generators the other day and wrote this gist. Which contains...
function flatten(arrayOfArrays=[]){
function* flatgen() {
for( let item of arrayOfArrays ) {
if ( Array.isArray( item )) {
yield* flatten(item)
} else {
yield item
}
}
}
return [...flatgen()];
}
var flatArray = flatten([[1, [4]],[2],[3]]);
console.log(flatArray);
Basically I'm creating a generator that loops over the original input array, if it finds an array it uses the yield* operator in combination with recursion to continually flatten the internal arrays. If the item is not an array it just yields the single item. Then using the ES6 Spread operator (aka splat operator) I flatten out the generator into a new array instance.
I haven't tested the performance of this, but I figure it is a nice simple example of using generators and the yield* operator.
But again, I was just goofing so I'm sure there are more performant ways to do this.
Just the best solution, without lodash:
let flatten = arr => [].concat.apply([], arr.map(item => Array.isArray(item) ? flatten(item) : item))
I would rather transform the whole array, as-is, to a string, but unlike other answers, I would do that using JSON.stringify rather than the toString() method, which produces an unwanted result.
With that JSON.stringify output, all that's left is to remove all the brackets, wrap the result in start and end brackets once again, and feed it to JSON.parse, which brings the string back to "life".
Can handle arbitrarily nested arrays without any speed cost.
Can correctly handle array items that are strings containing commas.
var arr = ["abc",[[[6]]],["3,4"],"2"];
var s = "[" + JSON.stringify(arr).replace(/\[|]/g,'') +"]";
var flattened = JSON.parse(s);
console.log(flattened)
Only for multidimensional Array of Strings/Numbers (not Objects)
Ways for making flatten array
using Es6 flat()
using Es6 reduce()
using recursion
using string manipulation
[1,[2,[3,[4,[5,[6,7],8],9],10]]] - [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
// using Es6 flat()
let arr = [1,[2,[3,[4,[5,[6,7],8],9],10]]]
console.log(arr.flat(Infinity))
// using Es6 reduce()
let flatIt = (array) => array.reduce(
(x, y) => x.concat(Array.isArray(y) ? flatIt(y) : y), []
)
console.log(flatIt(arr))
// using recursion
function myFlat(array) {
let flat = [].concat(...array);
return flat.some(Array.isArray) ? myFlat(flat) : flat;
}
console.log(myFlat(arr));
// using string manipulation
let strArr = arr.toString().split(',');
for(let i=0;i<strArr.length;i++)
strArr[i]=parseInt(strArr[i]);
console.log(strArr)
I think array.flat(Infinity) is a perfect solution. But the flat function is relatively new and may not run in older browser versions. We can use a recursive function to solve this.
const arr = ["A", ["B", [["B11", "B12", ["B131", "B132"]], "B2"]], "C", ["D", "E", "F", ["G", "H", "I"]]]
const flatArray = (arr) => {
const res = []
for (const item of arr) {
if (Array.isArray(item)) {
const subRes = flatArray(item)
res.push(...subRes)
} else {
res.push(item)
}
}
return res
}
console.log(flatArray(arr))

How to increment every number in a vector without the error "cannot borrow as mutable more than once at a time"?

This code is supposed to increment each value in a vector by 1:
fn main() {
    let mut v = vec![2, 3, 1, 4, 2, 5];
    let i = v.iter_mut();
    for j in i {
        *j += 1;
        println!("{}", j);
    }
    println!("{:?}", &mut v);
}
It doesn't work because of the borrowing rules of Rust:
error[E0499]: cannot borrow `v` as mutable more than once at a time
--> src/main.rs:8:27
|
3 | let i = v.iter_mut();
| - first mutable borrow occurs here
...
8 | println!("{:?}", &mut v);
| ^ second mutable borrow occurs here
9 | }
| - first borrow ends here
How can I accomplish this task?
Don't store the mutable iterator; use it directly in the loop instead:
fn main() {
    let mut v = vec![2, 3, 1, 4, 2, 5];
    for j in v.iter_mut() { // or: for j in &mut v
        *j += 1;
        println!("{}", j);
    }
    println!("{:?}", &v); // note that I dropped mut here; it's not needed
}
Your code also works as-is with non-lexical lifetimes, which required the nll nightly feature when this answer was written and have since been stabilized:
#![feature(nll)]
fn main() {
    let mut v = vec![2, 3, 1, 4, 2, 5];
    let i = v.iter_mut();
    for j in i {
        *j += 1;
        println!("{}", j);
    }
    println!("{:?}", &mut v);
}
playground
You can also just use map and collect, like:
>> let mut v = vec![5,1,4,2,3];
>> v.iter_mut().map(|x| *x += 1).collect::<Vec<_>>();
>> v
[6, 2, 5, 3, 4]
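A compilable sketch of the same idea that avoids collecting a vector of unit values (the Vec<_> above is really a Vec<()>), using for_each instead:
fn main() {
    let mut v = vec![5, 1, 4, 2, 3];
    v.iter_mut().for_each(|x| *x += 1);
    assert_eq!(v, [6, 2, 5, 3, 4]);
}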
In my opinion the simplest and most readable solution:
fn main() {
    let mut v = vec![2, 3, 1, 4, 2, 5];
    for i in 0..v.len() {
        v[i] += 1;
        println!("{}", v[i]);
    }
    println!("{:?}", v);
}

What is the best way to concatenate vectors in Rust?

Is it even possible to concatenate vectors in Rust? If so, is there an elegant way to do so? I have something like this:
let mut a = vec![1, 2, 3];
let b = vec![4, 5, 6];
for val in &b {
    a.push(*val);
}
Does anyone know of a better way?
The structure std::vec::Vec has method append():
fn append(&mut self, other: &mut Vec<T>)
Moves all the elements of other into Self, leaving other empty.
From your example, the following code will concatenate two vectors by mutating a and b:
fn main() {
    let mut a = vec![1, 2, 3];
    let mut b = vec![4, 5, 6];
    a.append(&mut b);
    assert_eq!(a, [1, 2, 3, 4, 5, 6]);
    assert_eq!(b, []);
}
Alternatively, you can use Extend::extend() to append all elements of something that can be turned into an iterator (like Vec) to a given vector:
let mut a = vec![1, 2, 3];
let b = vec![4, 5, 6];
a.extend(b);
assert_eq!(a, [1, 2, 3, 4, 5, 6]);
// b is moved and can't be used anymore
Note that the vector b is moved instead of emptied. If your vectors contain elements that implement Copy, you can pass an immutable reference to one vector to extend() instead in order to avoid the move. In that case the vector b is not changed:
let mut a = vec![1, 2, 3];
let b = vec![4, 5, 6];
a.extend(&b);
assert_eq!(a, [1, 2, 3, 4, 5, 6]);
assert_eq!(b, [4, 5, 6]);
In response to the comment "I can't make it in one line" (Damian Dziaduch):
It is possible to do it in one line by using chain():
let c: Vec<i32> = a.into_iter().chain(b.into_iter()).collect(); // Consumed
let c: Vec<&i32> = a.iter().chain(b.iter()).collect(); // Referenced
let c: Vec<i32> = a.iter().cloned().chain(b.iter().cloned()).collect(); // Cloned
let c: Vec<i32> = a.iter().copied().chain(b.iter().copied()).collect(); // Copied
There are infinite ways.
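For example, a complete sketch of the consuming variant (integer vectors assumed):
fn main() {
    let a = vec![1, 2, 3];
    let b = vec![4, 5, 6];
    // Both vectors are consumed; c owns the concatenated values.
    let c: Vec<i32> = a.into_iter().chain(b).collect();
    assert_eq!(c, [1, 2, 3, 4, 5, 6]);
}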
Regarding the performance, slice::concat, append and extend are about the same. If you don't need the results immediately, making it a chained iterator is the fastest; if you need to collect(), it is the slowest:
#![feature(test)]
extern crate test;
use test::Bencher;
#[bench]
fn bench_concat___init__(b: &mut Bencher) {
b.iter(|| {
let mut x = vec![1i32; 100000];
let mut y = vec![2i32; 100000];
});
}
#[bench]
fn bench_concat_append(b: &mut Bencher) {
b.iter(|| {
let mut x = vec![1i32; 100000];
let mut y = vec![2i32; 100000];
x.append(&mut y)
});
}
#[bench]
fn bench_concat_extend(b: &mut Bencher) {
b.iter(|| {
let mut x = vec![1i32; 100000];
let mut y = vec![2i32; 100000];
x.extend(y)
});
}
#[bench]
fn bench_concat_concat(b: &mut Bencher) {
b.iter(|| {
let mut x = vec![1i32; 100000];
let mut y = vec![2i32; 100000];
[x, y].concat()
});
}
#[bench]
fn bench_concat_iter_chain(b: &mut Bencher) {
b.iter(|| {
let mut x = vec![1i32; 100000];
let mut y = vec![2i32; 100000];
x.into_iter().chain(y.into_iter())
});
}
#[bench]
fn bench_concat_iter_chain_collect(b: &mut Bencher) {
b.iter(|| {
let mut x = vec![1i32; 100000];
let mut y = vec![2i32; 100000];
x.into_iter().chain(y.into_iter()).collect::<Vec<i32>>()
});
}
running 6 tests
test bench_concat___init__ ... bench: 27,261 ns/iter (+/- 3,129)
test bench_concat_append ... bench: 52,820 ns/iter (+/- 9,243)
test bench_concat_concat ... bench: 53,566 ns/iter (+/- 5,748)
test bench_concat_extend ... bench: 53,920 ns/iter (+/- 7,329)
test bench_concat_iter_chain ... bench: 26,901 ns/iter (+/- 1,306)
test bench_concat_iter_chain_collect ... bench: 190,334 ns/iter (+/- 16,107)
I think the best method to concatenate two or more vectors is this:
let first_number: Vec<usize> = Vec::from([0]);
let final_number: Vec<usize> = Vec::from([3]);
let middle_numbers: Vec<usize> = Vec::from([1,2]);
let numbers = [input_layer, middle_layers, output_layer].concat();
One option is to use the extend method, which allows you to append the elements of one vector to another. Like so:
let mut a = vec![1, 2, 3];
let b = vec![4, 5, 6];
a.extend(b);
This will append the elements of b to the end of a, resulting in a vector a with the elements [1, 2, 3, 4, 5, 6].
Another way is to use the concat method on a slice of vectors. It takes a slice whose items are vectors (or anything that borrows as a slice) and returns a new vector that is their concatenation, cloning the elements. Like so:
let a = vec![1, 2, 3];
let b = vec![4, 5, 6];
let c = [a, b].concat();
This will create a new vector c with the elements [1, 2, 3, 4, 5, 6].
You can also take full slices of the two vectors with [..] and call concat on them, which leaves the original vectors usable. Like so:
let a = vec![1, 2, 3];
let b = vec![4, 5, 6];
let c = [&a[..], &b[..]].concat();
This will create a new vector c with the elements [1, 2, 3, 4, 5, 6].
A small fix to Mattia Samiolo's answer:
let first_number: Vec<usize> = Vec::from([0]);
let final_number: Vec<usize> = Vec::from([3]);
let middle_numbers: Vec<usize> = Vec::from([1, 2]);
let numbers = [first_number, middle_numbers, final_number].concat();
println!("{:?}", numbers);

How do I get a slice of a Vec<T> in Rust?

I can not find within the documentation of Vec<T> how to retrieve a slice from a specified range.
Is there something like this in the standard library:
let a = vec![1, 2, 3, 4];
let suba = a.subvector(0, 2); // Contains [1, 2];
The documentation for Vec covers this in the section titled "slicing".
You can create a slice of a Vec or array by indexing it with a Range (or RangeInclusive, RangeFrom, RangeTo, RangeToInclusive, or RangeFull), for example:
fn main() {
    let a = vec![1, 2, 3, 4, 5];
    // With a start and an end
    println!("{:?}", &a[1..4]);
    // With a start and an end, inclusive
    println!("{:?}", &a[1..=3]);
    // With just a start
    println!("{:?}", &a[2..]);
    // With just an end
    println!("{:?}", &a[..3]);
    // With just an end, inclusive
    println!("{:?}", &a[..=2]);
    // All elements
    println!("{:?}", &a[..]);
}
If you wish to convert the entire Vec to a slice, you can use deref coercion:
fn main() {
    let a = vec![1, 2, 3, 4, 5];
    let b: &[i32] = &a;
    println!("{:?}", b);
}
This coercion is automatically applied when calling a function:
fn print_it(b: &[i32]) {
    println!("{:?}", b);
}
fn main() {
    let a = vec![1, 2, 3, 4, 5];
    print_it(&a);
}
You can also call Vec::as_slice, but it's a bit less common:
fn main() {
    let a = vec![1, 2, 3, 4, 5];
    let b = a.as_slice();
    println!("{:?}", b);
}
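Combining slicing with the coercion above, a sub-range of a Vec can be passed anywhere a slice is expected; a small sketch:
fn print_it(b: &[i32]) {
    println!("{:?}", b);
}

fn main() {
    let a = vec![1, 2, 3, 4, 5];
    print_it(&a[1..4]); // prints [2, 3, 4]
}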
See also:
Why is it discouraged to accept a reference to a String (&String), Vec (&Vec), or Box (&Box) as a function argument?
