Return multiple values from ES6 map() function - collections

Say I have something like this:
let values = [1, 2, 3, 4];
let newValues = values.map((v) => {
  return v * v;
});
console.log(newValues); // [1, 4, 9, 16]
Pretty straightforward.
Now what if I want to return multiple values for each element?
e.g.
let values = [1, 2, 3, 4];
let newValues = values.map((v) => {
  return [v * v, v * v * v, v + 1];
});
console.log(newValues); // This is what I want to get:
// [1, 1, 2, 4, 8, 3, 9, 27, 4, 16, 64, 5]
I can use a reduce function:
let values = [1, 2, 3, 4];
let newValues = values.map((v) => {
  return [v * v, v * v * v, v + 1];
}).reduce((a, c) => {
  return a.concat(c);
});
console.log(newValues);
But is that the best way to do this?

You can do this with a single reduce(); you don't need map().
A better approach is this:
const values = [1, 2, 3, 4];
const newValues = values.reduce((acc, cur) => {
  return acc.concat([cur * cur, cur * cur * cur, cur + 1]);
  // or: acc.push(cur * cur, cur * cur * cur, cur + 1); return acc;
}, []);
console.log('newValues =', newValues);
EDIT:
The better approach is to just use flatMap (as @ori-drori mentioned):
const values = [1, 2, 3, 4];
const newValues = values.flatMap((v) => [v * v, v * v * v, v + 1]);
console.log(JSON.stringify(newValues)); // [1, 1, 2, 4, 8, 3, 9, 27, 4, 16, 64, 5]

If you need to map an array and flatten the results, you can use Array.prototype.flatMap():
const values = [1, 2, 3, 4];
const newValues = values.flatMap((v) => [v * v, v * v * v, v + 1]);
console.log(JSON.stringify(newValues)); // [1, 1, 2, 4, 8, 3, 9, 27, 4, 16, 64, 5]
If Array.prototype.flatMap() is not available, flatten the results of the map by using Array#concat and the spread syntax:
const values = [1, 2, 3, 4];
const newValues = [].concat(...values.map((v) => [v * v, v * v * v, v + 1]));
console.log(JSON.stringify(newValues)); // [1, 1, 2, 4, 8, 3, 9, 27, 4, 16, 64, 5]

By definition, .map() returns an array of the same length as the input array, so it's just not a good choice when you want to produce a result of a different length.
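A quick illustration of why that matters here (not from the original answer): returning an array from the map callback just nests the values, and the outer length stays at 4:
// .map() produces exactly one output element per input element,
// so returning an array from the callback gives a nested result:
[1, 2, 3, 4].map((v) => [v * v, v + 1]);
// [[1, 2], [4, 3], [9, 4], [16, 5]]  (still length 4, not flattened)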
From an efficiency point of view, it's probably best to use for/of and avoid creating lots of intermediate arrays:
let values = [1, 2, 3, 4];
let result = [];
for (let val of values) {
  result.push(val * val, val * val * val, val + 1);
}
If you wanted to use array methods efficiently, you could use .reduce() with .push() to avoid creating a new array on every iteration:
let values = [1, 2, 3, 4];
let result = values.reduce((array, val) => {
  array.push(val * val, val * val * val, val + 1);
  return array;
}, []);

Better to use flatMap from lodash:
const output = _.flatMap([1, 2, 3, 4], (v) => [v * v, v * v * v, v + 1]);
output: [1, 1, 2, 4, 8, 3, 9, 27, 4, 16, 64, 5]

Related

Can this recursive function take less stack space (without rewriting as a loop)?

I've got some code which runs for a while and then throws stack overflow errors. Based on the behavior and my debugging, I do not think this is a case of infinite recursion, but rather one of deep (but finite) and inefficient recursion.
I'm pretty convinced the following code is to blame:
fn get_descendant_leaves(&self, p: Point) -> Vec<Point> {
    let children = self.get_children(p);
    if children.is_empty() {
        vec![p]
    } else {
        children
            .iter()
            .map(|child_p| self.get_descendant_leaves(child_p.0))
            .flatten()
            .collect()
    }
}
Note the recursive self.get_descendant_leaves call. get_children is a non-recursive function that does a little work to recompute immediate children based on what's stored at that location (maximum of 8, usually more like 1-3). Don't worry about the self parameter - there is a lot of information hiding behind it, but here it is only needed to compute the children of a given point.
I've found that the above function hits a depth of 2-3k simultaneously from multiple threads immediately before the program crashes, which is why I'm convinced it is to blame.
I'm sure I could fix this by manually re-implementing the function to instead loop with a couple of mutable vectors - one of discovered leaves, and one of non-leaves that still need to be explored. However, I sort of suspect that this is something Rust could do for me, were I only treating it more kindly.
Is there something I could do to represent this to Rust in a way that would automatically avoid the growing call stack? I'm wondering whether, if I handled the flatten or collect differently, Rust's iterators would automatically handle this in a way similar to the manual solution described above. (My reasoning is that because the recursive call is in a closure that could(?) be called lazily, this seems like something the compiler might be able to unwind on its own.)
(It has to run for an hour or more to grow complex enough to crash, so experimentation is slow.)
Bonus question: Debugging this to this state took me forever, using a mix of printlns and lldb's thread backtrace to see where things were when stuff exploded. What else should have been in my toolkit for investigating this? Googling "how to diagnose rust stack overflow" turned up mostly discouraging results.
EDIT: here's a parallel structure to make debugging more fruitful. Similarly to the code above, it recursively explores a tree, building a flattened vector from "leaves" where the base condition is met.
In the example case, the "tree" exists only in theory, but it can be thought of as a Fibonacci-esque tree where Tree(i) has as children Tree(i-1) and Tree(i-2) for i >= 2, with Tree(1) and Tree(0) as leaves.
Can this be done keeping the recursive function definition, but in such a way that Rust does not actually explode the stack in computing it?
fn main() {
    let awful = awful_vec_builder(10, 0);
    println!("See it's awful: {:?}", awful);
    println!("Let's smash the stack:");
    let more_awful = awful_vec_builder(5000, 0);
    println!("Please don't reach this case: {}", more_awful.len())
}
fn awful_vec_builder(i: usize, depth: usize) -> Vec<usize> {
    if i < 2 {
        vec![depth]
        // Note that using vec![] here still overflows the stack
    } else {
        let v = vec![i - 1, i - 2];
        v.iter()
            .map(|i| awful_vec_builder(*i, depth + 1))
            .flatten()
            .collect()
    }
}
I think the biggest problem in your implementation is the repeated flatten and collect calls. Why do those? You basically re-construct the vector in every single call.
Instead, pass the result vector through and write directly into it:
fn main() {
    let awful = awful_vec_builder(10, 0);
    println!("See it's awful: {:?}", awful);
    println!("Let's smash the stack:");
    let more_awful = awful_vec_builder(5000, 0);
    println!("Please don't reach this case: {}", more_awful.len())
}
fn awful_vec_builder(i: usize, depth: usize) -> Vec<usize> {
    fn awful_vec_builder_impl(i: usize, depth: usize, result: &mut Vec<usize>) {
        if i < 2 {
            result.push(depth);
        } else {
            let v = vec![i - 1, i - 2];
            for i in v {
                awful_vec_builder_impl(i, depth + 1, result);
            }
        }
    }
    let mut result = Vec::new();
    awful_vec_builder_impl(i, depth, &mut result);
    result
}
See it's awful: [9, 9, 8, 8, 8, 8, 8, 7, 8, 8, 7, 7, 7, 8, 8, 7, 7, 7, 7, 7, 6, 8, 8, 7, 7, 7, 7, 7, 6, 7, 7, 6, 6, 6, 8, 8, 7, 7, 7, 7, 7, 6, 7, 7, 6, 6, 6, 7, 7, 6, 6, 6, 6, 6, 5, 8, 8, 7, 7, 7, 7, 7, 6, 7, 7, 6, 6, 6, 7, 7, 6, 6, 6, 6, 6, 5, 7, 7, 6, 6, 6, 6, 6, 5, 6, 6, 5, 5, 5]
Let's smash the stack:
... hangs for a really long time ...
For your first example, here is some boilerplate code that makes it actually compile and crash, as specified:
fn main() {
    let tree = Tree;
    let awful = tree.get_descendant_leaves(Point { i: 10, depth: 0 });
    println!("See it's awful: {:?}", awful);
    println!("Let's smash the stack:");
    let more_awful = tree.get_descendant_leaves(Point { i: 5000, depth: 0 });
    println!("Please don't reach this case: {}", more_awful.len())
}
struct PointWrapper(Point);
#[derive(Debug, Copy, Clone)]
struct Point {
    i: usize,
    depth: usize,
}
struct Tree;
impl Tree {
    fn get_children(&self, p: Point) -> Vec<PointWrapper> {
        if p.i < 2 {
            vec![]
        } else {
            vec![
                PointWrapper(Point {
                    i: p.i - 1,
                    depth: p.depth + 1,
                }),
                PointWrapper(Point {
                    i: p.i - 2,
                    depth: p.depth + 1,
                }),
            ]
        }
    }
    fn get_descendant_leaves(&self, p: Point) -> Vec<Point> {
        let children = self.get_children(p);
        if children.is_empty() {
            vec![p]
        } else {
            children
                .iter()
                .map(|child_p| self.get_descendant_leaves(child_p.0))
                .flatten()
                .collect()
        }
    }
}
Similar to my other example, you could fix it like this:
fn get_descendant_leaves(&self, p: Point) -> Vec<Point> {
    fn get_descendant_leaves_impl(this: &Tree, p: Point, result: &mut Vec<Point>) {
        let children = this.get_children(p);
        if children.is_empty() {
            result.push(p);
        } else {
            for child_p in children {
                get_descendant_leaves_impl(this, child_p.0, result);
            }
        }
    }
    let mut result = Vec::new();
    get_descendant_leaves_impl(self, p, &mut result);
    result
}
That said, if there is a recursive solution, there is always a better iterative solution.
Like this one:
fn get_descendant_leaves(&self, p: Point) -> Vec<Point> {
    let mut result = Vec::new();
    let mut queue = Vec::new();
    queue.push(p);
    while let Some(p) = queue.pop() {
        let children = self.get_children(p);
        if children.is_empty() {
            result.push(p);
        } else {
            queue.extend(children.into_iter().map(|child| child.0).rev())
        }
    }
    result
}

How to replace portion of a vector using Rust?

What is the best way to replace a specific portion of a vector with a new vector?
As of now, I am using hardcoded code to replace the vector. What is the most effective way to achieve this?
fn main() {
    let mut v = vec![1, 2, 3, 4, 5, 6, 7, 8, 9];
    let u = vec![0, 0, 0, 0];
    v[2] = u[0];
    v[3] = u[1];
    v[4] = u[2];
    v[5] = u[3];
    println!("v = {:?}", v);
}
Permalink to the playground
Is there any function to replace the vector with given indices?
For Copy types:
v[2..][..u.len()].copy_from_slice(&u);
Playground.
For non-Copy types:
v.splice(2..2 + u.len(), u);
Playground.
Another way:
let offset: usize = 2;
u.iter().enumerate().for_each(|(index, &val)| {
    v[index + offset] = val;
});
Playground

flattening an array via the AST [duplicate]

I have a JavaScript array like:
[["$6"], ["$12"], ["$25"], ["$25"], ["$18"], ["$22"], ["$10"]]
How would I go about merging the separate inner arrays into one like:
["$6", "$12", "$25", ...]
ES2019
ES2019 introduced the Array.prototype.flat() method which you could use to flatten the arrays. It is compatible with most environments, although it is only available in Node.js starting with version 11, and not at all in Internet Explorer.
const arrays = [
["$6"],
["$12"],
["$25"],
["$25"],
["$18"],
["$22"],
["$10"]
];
const merge3 = arrays.flat(1); //The depth level specifying how deep a nested array structure should be flattened. Defaults to 1.
console.log(merge3);
Older browsers
For older browsers, you can use Array.prototype.concat to merge arrays:
var arrays = [
["$6"],
["$12"],
["$25"],
["$25"],
["$18"],
["$22"],
["$10"]
];
var merged = [].concat.apply([], arrays);
console.log(merged);
Using the apply method of concat will just take the second parameter as an array, so the last line is identical to this:
var merged = [].concat(["$6"], ["$12"], ["$25"], ["$25"], ["$18"], ["$22"], ["$10"]);
Here's a short function that uses some of the newer JavaScript array methods to flatten an n-dimensional array.
function flatten(arr) {
  return arr.reduce(function (flat, toFlatten) {
    return flat.concat(Array.isArray(toFlatten) ? flatten(toFlatten) : toFlatten);
  }, []);
}
Usage:
flatten([[1, 2, 3], [4, 5]]); // [1, 2, 3, 4, 5]
flatten([[[1, [1.1]], 2, 3], [4, 5]]); // [1, 1.1, 2, 3, 4, 5]
There is a confusingly hidden method, which constructs a new array without mutating the original one:
var oldArray = [[1],[2,3],[4]];
var newArray = Array.prototype.concat.apply([], oldArray);
console.log(newArray); // [ 1, 2, 3, 4 ]
This can best be done with the JavaScript reduce function.
var arrays = [["$6"], ["$12"], ["$25"], ["$25"], ["$18"], ["$22"], ["$10"], ["$0"], ["$15"],["$3"], ["$75"], ["$5"], ["$100"], ["$7"], ["$3"], ["$75"], ["$5"]];
arrays = arrays.reduce(function(a, b){
return a.concat(b);
}, []);
Or, with ES2015:
arrays = arrays.reduce((a, b) => a.concat(b), []);
js-fiddle
Mozilla docs
There's a new native method called flat to do this exactly.
(As of late 2019, flat is published in the ECMAScript 2019 standard, and core-js@3 (the polyfill library used by Babel) includes it.)
const arr1 = [1, 2, [3, 4]];
arr1.flat();
// [1, 2, 3, 4]
const arr2 = [1, 2, [3, 4, [5, 6]]];
arr2.flat();
// [1, 2, 3, 4, [5, 6]]
// Flatten 2 levels deep
const arr3 = [2, 2, 5, [5, [5, [6]], 7]];
arr3.flat(2);
// [2, 2, 5, 5, 5, [6], 7];
// Flatten all levels
const arr4 = [2, 2, 5, [5, [5, [6]], 7]];
arr4.flat(Infinity);
// [2, 2, 5, 5, 5, 6, 7];
Most of the answers here don't work on huge (e.g. 200 000 elements) arrays, and even if they do, they're slow.
Here is the fastest solution, which works also on arrays with multiple levels of nesting:
const flatten = function(arr, result = []) {
  for (let i = 0, length = arr.length; i < length; i++) {
    const value = arr[i];
    if (Array.isArray(value)) {
      flatten(value, result);
    } else {
      result.push(value);
    }
  }
  return result;
};
Examples
Huge arrays
flatten(Array(200000).fill([1]));
It handles huge arrays just fine. On my machine this code takes about 14 ms to execute.
Nested arrays
flatten(Array(2).fill(Array(2).fill(Array(2).fill([1]))));
It works with nested arrays. This code produces [1, 1, 1, 1, 1, 1, 1, 1].
Arrays with different levels of nesting
flatten([1, [1], [[1]]]);
It doesn't have any problems with flattening arrays like this one.
Update: it turned out that this solution doesn't work with large arrays. If you're looking for a better, faster solution, check out this answer.
function flatten(arr) {
  return [].concat(...arr)
}
It simply expands arr and passes its elements as arguments to concat(), which merges all the arrays into one. It's equivalent to [].concat.apply([], arr).
You can also try this for deep flattening:
function deepFlatten(arr) {
  return flatten(          // return shallowly flattened array
    arr.map(x =>           // with each x in array
      Array.isArray(x)     // is x an array?
        ? deepFlatten(x)   // if yes, return deeply flattened x
        : x                // if no, return just x
    )
  )
}
See demo on JSBin.
References for ECMAScript 6 elements used in this answer:
Spread operator
Arrow functions
Side note: methods like find() and arrow functions are not supported by all browsers, but it doesn't mean that you can't use these features right now. Just use Babel — it transforms ES6 code into ES5.
You can use Underscore:
var x = [[1], [2], [3, 4]];
_.flatten(x); // => [1, 2, 3, 4]
Generic procedures mean we don't have to rewrite complexity each time we need to utilize a specific behaviour.
concatMap (or flatMap) is exactly what we need in this situation.
// concat :: ([a],[a]) -> [a]
const concat = (xs,ys) =>
xs.concat (ys)
// concatMap :: (a -> [b]) -> [a] -> [b]
const concatMap = f => xs =>
xs.map(f).reduce(concat, [])
// id :: a -> a
const id = x =>
x
// flatten :: [[a]] -> [a]
const flatten =
concatMap (id)
// your sample data
const data =
[["$6"], ["$12"], ["$25"], ["$25"], ["$18"], ["$22"], ["$10"]]
console.log (flatten (data))
foresight
And yes, you guessed it correctly, it only flattens one level, which is exactly how it should work
Imagine some data set like this
// Player :: (String, Number) -> Player
const Player = (name,number) =>
[ name, number ]
// Team :: (...Player) -> Team
const Team = (...players) =>
players
// Game :: (Team, Team) -> Game
const Game = (teamA, teamB) =>
[ teamA, teamB ]
// sample data
const teamA =
Team (Player ('bob', 5), Player ('alice', 6))
const teamB =
Team (Player ('ricky', 4), Player ('julian', 2))
const game =
Game (teamA, teamB)
console.log (game)
// [ [ [ 'bob', 5 ], [ 'alice', 6 ] ],
// [ [ 'ricky', 4 ], [ 'julian', 2 ] ] ]
Ok, now say we want to print a roster that shows all the players that will be participating in game …
const gamePlayers = game =>
flatten (game)
gamePlayers (game)
// => [ [ 'bob', 5 ], [ 'alice', 6 ], [ 'ricky', 4 ], [ 'julian', 2 ] ]
If our flatten procedure flattened nested arrays too, we'd end up with this garbage result …
const gamePlayers = game =>
badGenericFlatten(game)
gamePlayers (game)
// => [ 'bob', 5, 'alice', 6, 'ricky', 4, 'julian', 2 ]
rollin' deep, baby
That's not to say sometimes you don't want to flatten nested arrays, too – only that shouldn't be the default behaviour.
We can make a deepFlatten procedure with ease …
// concat :: ([a],[a]) -> [a]
const concat = (xs,ys) =>
xs.concat (ys)
// concatMap :: (a -> [b]) -> [a] -> [b]
const concatMap = f => xs =>
xs.map(f).reduce(concat, [])
// id :: a -> a
const id = x =>
x
// flatten :: [[a]] -> [a]
const flatten =
concatMap (id)
// deepFlatten :: [[a]] -> [a]
const deepFlatten =
concatMap (x =>
Array.isArray (x) ? deepFlatten (x) : x)
// your sample data
const data =
[0, [1, [2, [3, [4, 5], 6]]], [7, [8]], 9]
console.log (flatten (data))
// [ 0, 1, [ 2, [ 3, [ 4, 5 ], 6 ] ], 7, [ 8 ], 9 ]
console.log (deepFlatten (data))
// [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 ]
There. Now you have a tool for each job – one for squashing one level of nesting, flatten, and one for obliterating all nesting deepFlatten.
Maybe you can call it obliterate or nuke if you don't like the name deepFlatten.
Don't iterate twice!
Of course the above implementations are clever and concise, but using a .map followed by a call to .reduce means we're actually doing more iterations than necessary.
Using a trusty combinator I'm calling mapReduce helps keep the iterations to a minimum; it takes a mapping function m :: a -> b and a reducing function r :: (b,a) -> b, and returns a new reducing function. This combinator is at the heart of transducers; if you're interested, I've written other answers about them.
// mapReduce :: (a -> b, (b,a) -> b) -> (b,a) -> b
const mapReduce = (m,r) =>
(acc,x) => r (acc, m (x))
// concatMap :: (a -> [b]) -> [a] -> [b]
const concatMap = f => xs =>
xs.reduce (mapReduce (f, concat), [])
// concat :: ([a],[a]) -> [a]
const concat = (xs,ys) =>
xs.concat (ys)
// id :: a -> a
const id = x =>
x
// flatten :: [[a]] -> [a]
const flatten =
concatMap (id)
// deepFlatten :: [[a]] -> [a]
const deepFlatten =
concatMap (x =>
Array.isArray (x) ? deepFlatten (x) : x)
// your sample data
const data =
[ [ [ 1, 2 ],
[ 3, 4 ] ],
[ [ 5, 6 ],
[ 7, 8 ] ] ]
console.log (flatten (data))
// [ [ 1, 2 ], [ 3, 4 ], [ 5, 6 ], [ 7, 8 ] ]
console.log (deepFlatten (data))
// [ 1, 2, 3, 4, 5, 6, 7, 8 ]
To flatten an array of single element arrays, you don't need to import a library; a simple loop is both the simplest and most efficient solution:
for (var i = 0; i < a.length; i++) {
  a[i] = a[i][0];
}
To downvoters: please read the question, don't downvote because it doesn't suit your very different problem. This solution is both the fastest and simplest for the asked question.
Another ECMAScript 6 solution in functional style:
Declare a function:
const flatten = arr => arr.reduce(
(a, b) => a.concat(Array.isArray(b) ? flatten(b) : b), []
);
and use it:
flatten( [1, [2,3], [4,[5,[6]]]] ) // -> [1,2,3,4,5,6]
const flatten = arr => arr.reduce(
(a, b) => a.concat(Array.isArray(b) ? flatten(b) : b), []
);
console.log( flatten([1, [2,3], [4,[5],[6,[7,8,9],10],11],[12],13]) )
Consider also the native Array.prototype.flat() method (originally a proposal, now part of ES2019), available in the latest releases of modern browsers. Thanks to @Константин Ван and @Mark Amery for mentioning it in the comments.
The flat function has one parameter, specifying the expected depth of array nesting, which equals 1 by default.
[1, 2, [3, 4]].flat(); // -> [1, 2, 3, 4]
[1, 2, [3, 4, [5, 6]]].flat(); // -> [1, 2, 3, 4, [5, 6]]
[1, 2, [3, 4, [5, 6]]].flat(2); // -> [1, 2, 3, 4, 5, 6]
[1, 2, [3, 4, [5, 6]]].flat(Infinity); // -> [1, 2, 3, 4, 5, 6]
let arr = [1, 2, [3, 4]];
console.log( arr.flat() );
arr = [1, 2, [3, 4, [5, 6]]];
console.log( arr.flat() );
console.log( arr.flat(1) );
console.log( arr.flat(2) );
console.log( arr.flat(Infinity) );
You can also try the new Array.flat() method. It works in the following manner:
let arr = [["$6"], ["$12"], ["$25"], ["$25"], ["$18"], ["$22"], ["$10"]].flat()
console.log(arr);
The flat() method creates a new array with all sub-array elements concatenated into it recursively, up to a depth of 1 by default (i.e. one level of arrays inside arrays).
If you want to flatten 3-dimensional or even higher-dimensional arrays, you can simply call the flat method multiple times. For example (3 dimensions):
let arr = [1,2,[3,4,[5,6]]].flat().flat().flat();
console.log(arr);
Be careful!
The Array.flat() method is relatively new. Older browsers such as IE have not implemented it. If you want your code to work in all browsers, you may need a polyfill (for example core-js, mentioned above), since transpiling alone will not add the method. Check the MDN web docs for current browser compatibility.
A solution for the more general case, when you may have some non-array elements in your array.
function flattenArrayOfArrays(a, r) {
  if (!r) { r = [] }
  for (var i = 0; i < a.length; i++) {
    if (a[i].constructor == Array) {
      flattenArrayOfArrays(a[i], r);
    } else {
      r.push(a[i]);
    }
  }
  return r;
}
What about using the reduce(callback[, initialValue]) method of JavaScript 1.8?
list.reduce((p,n) => p.concat(n),[]);
Would do the job.
const common = arr.reduce((a, b) => [...a, ...b], [])
You can use Array.flat() with Infinity for any depth of nested array.
var arr = [ [1,2,3,4], [1,2,[1,2,3]], [1,2,3,4,5,[1,2,3,4,[1,2,3,4]]], [[1,2,3,4], [1,2,[1,2,3]], [1,2,3,4,5,[1,2,3,4,[1,2,3,4]]]] ];
let flatten = arr.flat(Infinity)
console.log(flatten)
check here for browser compatibility
Please note: When Function.prototype.apply ([].concat.apply([], arrays)) or the spread operator ([].concat(...arrays)) is used in order to flatten an array, both can cause stack overflows for large arrays, because every argument of a function is stored on the stack.
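As a rough illustration (the exact limit is engine-dependent), spreading a very large outer array into concat can already fail:
// Every spread element becomes a function argument, so a huge outer array
// can exceed the engine's argument/stack limit:
const chunks = Array(1000000).fill([1]);
const flat = [].concat(...chunks); // may throw RangeError: Maximum call stack size exceeded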
Here is a stack-safe implementation in functional style that weighs up the most important requirements against one another:
reusability
readability
conciseness
performance
// small, reusable auxiliary functions:
const foldl = f => acc => xs => xs.reduce(uncurry(f), acc); // aka reduce
const uncurry = f => (a, b) => f(a) (b);
const concat = xs => y => xs.concat(y);
// the actual function to flatten an array - a self-explanatory one-liner:
const flatten = xs => foldl(concat) ([]) (xs);
// arbitrary array sizes (until the heap blows up :D)
const xs = [[1,2,3],[4,5,6],[7,8,9]];
console.log(flatten(xs));
// Deriving a recursive solution for deeply nested arrays is trivial now
// yet more small, reusable auxiliary functions:
const map = f => xs => xs.map(apply(f));
const apply = f => a => f(a);
const isArray = Array.isArray;
// the derived recursive function:
const flattenr = xs => flatten(map(x => isArray(x) ? flattenr(x) : x) (xs));
const ys = [1,[2,[3,[4,[5],6,],7],8],9];
console.log(flattenr(ys));
As soon as you get used to small arrow functions in curried form, function composition and higher order functions, this code reads like prose. Programming then merely consists of putting together small building blocks that always work as expected, because they don't contain any side effects.
ES6 One Line Flatten
See lodash flatten, underscore flatten (shallow true)
function flatten(arr) {
return arr.reduce((acc, e) => acc.concat(e), []);
}
or
function flatten(arr) {
return [].concat.apply([], arr);
}
Tested with
test('already flatted', () => {
expect(flatten([1, 2, 3, 4, 5])).toEqual([1, 2, 3, 4, 5]);
});
test('flats first level', () => {
expect(flatten([1, [2, [3, [4]], 5]])).toEqual([1, 2, [3, [4]], 5]);
});
ES6 One Line Deep Flatten
See lodash flattenDeep, underscore flatten
function flattenDeep(arr) {
return arr.reduce((acc, e) => Array.isArray(e) ? acc.concat(flattenDeep(e)) : acc.concat(e), []);
}
Tested with
test('already flatted', () => {
expect(flattenDeep([1, 2, 3, 4, 5])).toEqual([1, 2, 3, 4, 5]);
});
test('flats', () => {
expect(flattenDeep([1, [2, [3, [4]], 5]])).toEqual([1, 2, 3, 4, 5]);
});
Using the spread operator:
const input = [["$6"], ["$12"], ["$25"], ["$25"], ["$18"], ["$22"], ["$10"]];
const output = [].concat(...input);
console.log(output); // --> ["$6", "$12", "$25", "$25", "$18", "$22", "$10"]
I recommend a space-efficient generator function:
function* flatten(arr) {
  if (!Array.isArray(arr)) yield arr;
  else for (let el of arr) yield* flatten(el);
}
// Example:
console.log(...flatten([1,[2,[3,[4]]]])); // 1 2 3 4
If desired, create an array of flattened values as follows:
let flattened = [...flatten([1,[2,[3,[4]]]])]; // [1, 2, 3, 4]
If you only have arrays with 1 string element:
[["$6"], ["$12"], ["$25"], ["$25"]].join(',').split(',');
will do the job. But note that this specifically matches your code example.
I have done it using recursion and closures
function flatten(arr) {
  var temp = [];
  function recursiveFlatten(arr) {
    for (var i = 0; i < arr.length; i++) {
      if (Array.isArray(arr[i])) {
        recursiveFlatten(arr[i]);
      } else {
        temp.push(arr[i]);
      }
    }
  }
  recursiveFlatten(arr);
  return temp;
}
A Haskellesque approach
function flatArray([x, ...xs]) {
  return x ? [...(Array.isArray(x) ? flatArray(x) : [x]), ...flatArray(xs)] : [];
}
var na = [[1, 2], [3, [4, 5]], [6, 7, [[[8], 9]]], 10];
var fa = flatArray(na);
console.log(fa);
ES6 way:
const flatten = arr => arr.reduce((acc, next) => acc.concat(Array.isArray(next) ? flatten(next) : next), [])
const a = [1, [2, [3, [4, [5]]]]]
console.log(flatten(a))
ES5 way for flatten function with ES3 fallback for N-times nested arrays:
var flatten = (function() {
  if (!!Array.prototype.reduce && !!Array.isArray) {
    return function(array) {
      return array.reduce(function(prev, next) {
        return prev.concat(Array.isArray(next) ? flatten(next) : next);
      }, []);
    };
  } else {
    return function(array) {
      var arr = [];
      var i = 0;
      var len = array.length;
      var target;
      for (; i < len; i++) {
        target = array[i];
        arr = arr.concat(
          (Object.prototype.toString.call(target) === '[object Array]') ? flatten(target) : target
        );
      }
      return arr;
    };
  }
}());
var a = [1, [2, [3, [4, [5]]]]];
console.log(flatten(a));
If you use lodash, you can just use its flatten method: https://lodash.com/docs/4.17.14#flatten
The nice thing about lodash is that it also has methods to flatten the arrays:
i) recursively: https://lodash.com/docs/4.17.14#flattenDeep
ii) upto n levels of nesting: https://lodash.com/docs/4.17.14#flattenDepth
For example
const _ = require("lodash");
const pancake = _.flatten(array)
I was goofing around with ES6 generators the other day and wrote this gist, which contains...
function flatten(arrayOfArrays = []) {
  function* flatgen() {
    for (let item of arrayOfArrays) {
      if (Array.isArray(item)) {
        yield* flatten(item)
      } else {
        yield item
      }
    }
  }
  return [...flatgen()];
}
var flatArray = flatten([[1, [4]],[2],[3]]);
console.log(flatArray);
Basically I'm creating a generator that loops over the original input array, if it finds an array it uses the yield* operator in combination with recursion to continually flatten the internal arrays. If the item is not an array it just yields the single item. Then using the ES6 Spread operator (aka splat operator) I flatten out the generator into a new array instance.
I haven't tested the performance of this, but I figure it is a nice simple example of using generators and the yield* operator.
But again, I was just goofing so I'm sure there are more performant ways to do this.
Just the best solution, without lodash:
let flatten = arr => [].concat.apply([], arr.map(item => Array.isArray(item) ? flatten(item) : item))
I would rather transform the whole array, as-is, to a string, but unlike other answers, I would do that using JSON.stringify and not the toString() method, which produces an unwanted result.
With that JSON.stringify output, all that's left is to remove all brackets, wrap the result in opening and closing brackets once again, and feed it to JSON.parse, which brings the string back to "life".
It can handle infinitely nested arrays without any speed cost.
It can correctly handle array items which are strings containing commas.
var arr = ["abc",[[[6]]],["3,4"],"2"];
var s = "[" + JSON.stringify(arr).replace(/\[|]/g,'') +"]";
var flattened = JSON.parse(s);
console.log(flattened)
Only for multidimensional Array of Strings/Numbers (not Objects)
Ways to flatten an array:
using ES6 flat()
using ES6 reduce()
using recursion
using string manipulation
Input: [1,[2,[3,[4,[5,[6,7],8],9],10]]] -> Output: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
// using ES6 flat()
let arr = [1, [2, [3, [4, [5, [6, 7], 8], 9], 10]]]
console.log(arr.flat(Infinity))
// using ES6 reduce()
let flatIt = (array) => array.reduce(
  (x, y) => x.concat(Array.isArray(y) ? flatIt(y) : y), []
)
console.log(flatIt(arr))
// using recursion
function myFlat(array) {
  let flat = [].concat(...array);
  return flat.some(Array.isArray) ? myFlat(flat) : flat;
}
console.log(myFlat(arr));
// using string manipulation
let strArr = arr.toString().split(',');
for (let i = 0; i < strArr.length; i++)
  strArr[i] = parseInt(strArr[i]);
console.log(strArr)
I think array.flat(Infinity) is a perfect solution. But flat is a relatively new function and may not run in older browser versions. We can use a recursive function to solve this.
const arr = ["A", ["B", [["B11", "B12", ["B131", "B132"]], "B2"]], "C", ["D", "E", "F", ["G", "H", "I"]]]
const flatArray = (arr) => {
  const res = []
  for (const item of arr) {
    if (Array.isArray(item)) {
      const subRes = flatArray(item)
      res.push(...subRes)
    } else {
      res.push(item)
    }
  }
  return res
}
console.log(flatArray(arr))

Intersection of two lists maintaining duplicate values in Kotlin

I want to find the number of common elements between two lists without eliminating duplicates.
For example:
input: [1, 3, 3] & [4, 3, 3]
output: 2, since the common elements are [3, 3]
input: [1, 2, 3] & [4, 3, 3]
output: 1, since the common elements are [3]
If I were to use the Kotlin collections intersect, the result is a set, which will prevent me from counting duplicate values.
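For example, a quick illustration of the problem (not from the original question):
// intersect returns a Set, so the duplicate 3 is collapsed:
val common = listOf(1, 3, 3).intersect(listOf(4, 3, 3))
println(common)      // [3]
println(common.size) // 1, but the desired answer is 2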
I found (for Python) this, which handles duplicates differently and this, which led me to use this implementation, where a and b are the lists:
val aCounts = a.groupingBy { it }.eachCount()
val bCounts = b.groupingBy { it }.eachCount()
var intersectionCount = 0;
for ((k, v) in aCounts) {
intersectionCount += Math.min(v, bCounts.getOrDefault(k, 0))
}
However, being new to Kotlin I'm wondering if there's a more "Kotlin-y" way to do this--something taking advantage of all Kotlin's collections functionality? Maybe something that avoids explicitly iterating?
This:
val a = listOf(1, 2, 3, 3, 4, 5, 5, 5, 6)
val b = listOf(1, 3, 3, 3, 4, 4, 5, 6, 6, 7)
var counter = 0
a.intersect(b).forEach { x -> counter += listOf(a.count {it == x}, b.count {it == x}).min()!! }
println(counter)
will print
6
It uses the intersection of the 2 lists and, by iterating through each of its items, adds to the counter the minimum number of occurrences of that item in both lists.
With this import:
import kotlin.math.min
you can avoid the creation of a list at each iteration and simplify to:
a.intersect(b).forEach { x-> counter += min(a.count {it == x}, b.count {it == x}) }
Courtesy of Arjan, a more elegant way to calculate the sum:
val result = a.intersect(b).map { x -> min(a.count {it == x}, b.count {it == x}) }.sum()
Get common elements from two or more ArrayLists
input
a = {1, 2, 2, 4, 5, 6}
b = {1, 2, 2, 4, 5, 6}
c = {1, 2, 2, 4, 6}
output = {1, 2, 2, 4, 6}
fun main() {
    val array = ArrayList<ArrayList<String>>()
    val arr1 = arrayListOf("1", "2", "2", "4", "5", "6")
    val arr2 = arrayListOf("1", "2", "2", "4", "5", "6")
    val arr3 = arrayListOf("1", "2", "2", "4", "6")
    array.add(arr1)
    array.add(arr2)
    array.add(arr3)
    println(getCommonElements(array))
}
Create a small class for storing arrayIndex and elementIndex
internal class IndexArray(val arrayIndex: Int,
val elementIndex: Int)
Algorithm for getting Common Elements
fun getCommonElements(arrayList: ArrayList<ArrayList<String>>): ArrayList<String> {
    val commonElements = ArrayList<String>()
    var isContain = true
    val firstArray = arrayList[0]
    val indexArray = ArrayList<IndexArray>()
    // for loop for firstArray
    for (e in firstArray) {
        var elementIndex: Int
        var arrayIndex: Int
        // for loop for next ArrayList
        for (i in 1 until arrayList.size) {
            if (!arrayList[i].contains(e)) {
                isContain = false
                break
            } else {
                elementIndex = arrayList[i].indexOf(e)
                arrayIndex = i
                indexArray.add(IndexArray(arrayIndex, elementIndex))
            }
        }
        if (isContain) {
            commonElements.add(e)
            // remove element
            for (i in 0 until indexArray.size) {
                arrayList[indexArray[i].arrayIndex].removeAt(indexArray[i].elementIndex)
            }
            indexArray.clear()
        } else {
            indexArray.clear()
            isContain = true
        }
    }
    return commonElements
}

How best to find an element in nested lists?

Kotlin provides some useful extension functions that allow stream-like programming.
For example, if I look for an element in a list I can use find:
return list.find { n -> n>4 && n<6 }
But when I have nested lists this does not seem practical. I have to use forEach then -- luckily I can return from an inner lambda with Kotlin:
private fun findUsingForEach(data: List<List<Int>>, pred: (Int) -> Boolean): Optional<Int> {
    data.forEach { list ->
        list.forEach { n ->
            if (pred(n)) return Optional.of(n)
        }
    }
    return Optional.empty()
}
It seems to me that forEach is not the right tool for this. Is there a more functional way to do it? filter comes to mind, but the nesting causes problems.
The following is the test I use for the function above:
@Test
open fun findTest() {
    val data = listOf(listOf(1, 2, 3), listOf(3, 4, 5, 6), listOf(), listOf(6, 7, 8))
    val e = findUsingForEach(data, { n -> n > 4 && n < 6 })
    assertEquals(5, e.get())
}
You could flatten the list:
fun <T> Iterable<Iterable<T>>.flatten(): List<T> (source)
Returns a single list of all elements from all collections in the given collection.
val data = listOf(listOf(1, 2, 3), listOf(3, 4, 5, 6), listOf(), listOf(6, 7, 8))
data.flatten().find { n -> n > 4 && n < 6 }
This will return a single list with the elements of the sublists in order. Then you can use find as usual.
In your example,
{{1, 2, 3}, {3, 4, 5, 6}, {}, {6, 7, 8}}
becomes
{1, 2, 3, 3, 4, 5, 6, 6, 7, 8}
and the result of find on this list is 5.
However, this will create a new list. Take a look at the source of flatten:
/**
 * Returns a single list of all elements from all collections in the given collection.
 */
public fun <T> Iterable<Iterable<T>>.flatten(): List<T> {
    val result = ArrayList<T>()
    for (element in this) {
        result.addAll(element)
    }
    return result
}
If you want to save memory, create a Sequence from your list first:
data.asSequence()
and then perform your operations on this sequence:
data.asSequence().flatten().find { n -> n > 4 && n < 6 }
Side note: your predicate, n > 4 && n < 6, is simply equivalent to n == 5.
If you just want to reduce code and don't care much about efficiency, try this:
list.flatten().find { your pred here }
Or
list.flatMap { it }.find { your pred }
Or create a useful utility which doesn't create new lists (faster, lower memory use):
inline fun <T> Iterable<Iterable<T>>.forEachEach(f: (T) -> Unit) =
forEach { it.forEach(f) }
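With that helper, the question's lookup could be written without nesting the lambdas by hand. This is just a sketch (the name findUsingForEachEach is mine, not from the question; the non-local return is possible because both forEachEach and forEach are inline):
import java.util.Optional
private fun findUsingForEachEach(data: List<List<Int>>, pred: (Int) -> Boolean): Optional<Int> {
    // Non-local return from the lambda works here because the functions are inline.
    data.forEachEach { n -> if (pred(n)) return Optional.of(n) }
    return Optional.empty()
}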
