Elixir lets you pipe input into a function, which often makes code more readable. For example, something like this
sentence |> String.split(word_splitter, trim: true)
pipes the string sentence into the first argument of String.split.
Now suppose I would also like to pipe the second argument into String.split. Is there a way to do that in Elixir? I mean something like this:
sentence, word_splitter |> String.split(trim: true)
Thanks!
Thanks!
As @Dogbert pointed out, this is impossible out of the box. A helper is pretty straightforward, though:
defmodule MultiApplier do
  def pipe(params, mod, fun, args \\ []) do
    apply(mod, fun, List.foldr(params, args, &List.insert_at(&2, 0, &1)))
  end
end
iex> ["a b c", " "]
...> |> MultiApplier.pipe(String, :split, [[trim: true]])
#⇒ ["a", "b", "c"]
iex> ["a b c", " ", [trim: true]]
...> |> MultiApplier.pipe(String, :split, [])
#⇒ ["a", "b", "c"]
iex> ["a b c"]
...> |> MultiApplier.pipe(String, :split, [" ", [trim: true]])
#⇒ ["a", "b", "c"]
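As an aside, in a language with argument splatting the same trick (prepending the piped-in values to any remaining arguments before applying the function) collapses to a one-liner. Here is a rough Python sketch for comparison; the helper name pipe is mine, not a library function:

```python
def pipe(params, fun, args=()):
    # Prepend the piped-in params to any trailing positional args,
    # mirroring MultiApplier.pipe/4 above.
    return fun(*params, *args)

pipe(["a b c", " "], str.split)  # → ['a', 'b', 'c']
```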
I'm learning Elixir and I'm having difficulty with this simple problem:
I have a list of values:
my_list = ["a", "b", "c", "y", "z", "a", "e"]
And I have a map:
my_map = %{"a" => -1, "b" => 0, "c" => 1, "d" => 2, "e" => 3}
I want to loop through my_list, look up each value in my_map, and sum the corresponding map values for the keys that are present.
In the above example it should return 2 because:
-1 + 0 + 1 + (ignored) + (ignored) - 1 + 3
# => 2
This is a very easy thing to do in languages with mutable variables (we can loop over the list and increment a counter). I'm working on changing my mindset.
Thank you for your help!
I'm working on changing my mindset.
Admirable. You'll find great success in Elixir if you're willing to change your mindset and try to think more functionally. With that in mind, let's break the problem down.
I want to loop through my_list
More precisely, you want to do something to each element of the list and get a list of the results. That's Enum.map/2.
Enum.map(my_list, fn x -> ... end)
Now, ... needs to be replaced with what we want to do to each list element. We want to get the corresponding map values, ignoring keys that are not present. Since we're taking a sum, "ignoring" really just means "replacing with zero". Map.get/3 gets a value from a map, returning a default when the key is absent.
Enum.map(my_list, fn x -> Map.get(my_map, x, 0) end)
Now, we have a list of numbers. We just want the sum. That could be done in terms of Enum.reduce/3, but summing is a common enough task that it has its own function: Enum.sum/1.
Enum.sum(Enum.map(my_list, fn x -> Map.get(my_map, x, 0) end))
Finally, this reads backwards. It says "sum the result of mapping over the list", when it would read much cleaner as "take the list, get the elements from the map, then take a sum". We can clean it up with the pipe operator. The following is equivalent to the above.
my_list |> Enum.map(fn x -> Map.get(my_map, x, 0) end) |> Enum.sum
This is a nice use case for a comprehension:
for key <- my_list, val = my_map[key], reduce: 0 do
  acc -> acc + val
end
Here val = my_map[key] acts as a filter. When key is not in my_map, the lookup returns nil, which is falsy, so that element is skipped.
While both answers given here are perfectly valid, I'm going to post another one using plain recursion, for the sake of completeness.
defmodule Summer do
  def consume(list, map, acc \\ 0) # head
  def consume([], _, acc), do: acc # exhausted
  def consume([h | t], map, acc) when is_map_key(map, h),
    do: consume(t, map, acc + Map.fetch!(map, h))
  def consume([_h | t], map, acc), do: consume(t, map, acc)
end
Summer.consume my_list, my_map
#⇒ 2
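For comparison only (not part of the Elixir answers above), the same look-up-with-default-then-sum pattern written in Python makes the shape of the solution easy to see:

```python
my_list = ["a", "b", "c", "y", "z", "a", "e"]
my_map = {"a": -1, "b": 0, "c": 1, "d": 2, "e": 3}

# dict.get(key, 0) plays the same role as Map.get(my_map, x, 0)
total = sum(my_map.get(x, 0) for x in my_list)
print(total)  # → 2
```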
I have a dataframe with a column text that is a list of strings, like this:
text
["text1","text2"]
["text3","text4"]
How can I clean the string to get another column text_clean like this:
text_clean
text1,text2
text3,text4
When I type df in the REPL I get:
text
String
["string"]
["string","anotherstring"]
but when I type:
df[!,:text]
I get:
"[\"string\"]"
"[\"string\",\"anotherstring\"]"
I would like to create a new column, called text_clean:
string
string, anotherstring
Thanks
julia> a = [["text", "text2"], ["text"], ["text", "text2", "text", "text2"]]
3-element Vector{Vector{String}}:
["text", "text2"]
["text"]
["text", "text2", "text", "text2"]
julia> join.(a, ",")
3-element Vector{String}:
"text,text2"
"text"
"text,text2,text,text2"
Replace a with your column, e.g. df.text.
It seems that your strings literally contain the [ and " characters.
First of all, make sure that this is intended. For example, you might have something like vec = ["string", "anotherstring"], and at some point before this, code doing the equivalent of df[1, :text] = string(vec). Instead, do df[1, :text] = join(vec, ", ") when assigning to the text column, so that the original column itself is clean.
If the above doesn't apply and you have to deal with the column as given, then you can create your new cleaned column like this:
julia> df = DataFrame(:text => [string(["hello", "world"]), string(["this","is","SPARTA"])])
2×1 DataFrame
Row │ text
│ String
─────┼──────────────────────────
1 │ ["hello", "world"]
2 │ ["this", "is", "SPARTA"]
julia> df[!, :text_clean] = map(df.text) do str
           str |>
           s -> strip(s, ('[', ']')) |>        # remove the outer [ ]
           s -> strip.(split(s, ", "), '"') |> # remove the inner "
           sv -> join(sv, ", ")
       end
2-element Vector{String}:
"hello, world"
"this, is, SPARTA"
(You might have to adjust the second argument to split above based on whether or not you have a space after the commas in the text column.)
Or, making use of Julia's own syntax parsing,
julia> df[!, :text_clean] = map(df.text) do str
           str |> Meta.parse |>
           ex -> ex.head == :vect && eval(ex) |>
           sv -> join(sv, ", ")
       end
2-element Vector{String}:
"hello, world"
"this, is, SPARTA"
(The ex.head == :vect is a basic sanity check to make sure that the string is in the format you expect, and not anything malicious, before evaluating it.)
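The parse-then-join idea is not Julia-specific. As a cross-language illustration, Python's ast.literal_eval performs the same safe evaluation (it accepts only literal expressions, which plays the role of the ex.head == :vect sanity check); the helper name clean is mine:

```python
import ast

def clean(s):
    # literal_eval refuses anything that is not a plain literal,
    # so a malicious string raises an error instead of executing
    return ", ".join(ast.literal_eval(s))

clean('["hello", "world"]')  # → 'hello, world'
```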
I have trouble understanding why the code below doesn't work:
(10, 10) |> ((a,b) -> a + b)
The actual use case is a lot more complicated, but I am hoping to understand this simple pattern in Julia first.
((a,b) -> a + b) is a function of two arguments, while the tuple (10, 10) is a single value. As an alternative to splatting, as @Gnimuc proposes, you could unpack the argument in the lambda:
julia> (10, 10) |> (((a,b),) -> a + b)
20
But I find the extra comma a bit ugly, to be honest.
The pipe operator only supports single-argument chaining:
"""
    |>(x, f)

Applies a function to the preceding argument. This allows for easy function chaining.

# Examples
```jldoctest
julia> [1:5;] |> x->x.^2 |> sum |> inv
0.01818181818181818
```
"""
|>(x, f) = f(x)
(10, 10) |> ((a,b) -> a + b) is equivalent to ((a,b) -> a + b)((10, 10)), which fails in Julia because the two-argument function receives a single tuple. It needs the splatting operator: ((a,b) -> a + b)((10, 10)...), and hence:
julia> (10, 10) |> x->((a,b) -> a + b)(x...)
20
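The tuple-vs-argument-list distinction isn't unique to Julia. For comparison, here is the same situation in Python, where a tuple also needs explicit splatting:

```python
add = lambda a, b: a + b
t = (10, 10)

# add(t) would raise TypeError: the lambda wants two arguments,
# but t is a single tuple value
print(add(*t))  # → 20
```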
Apologies for a rookie question. I'm trying to change my mental paradigm from procedural to functional.
For instance, suppose I have a list of names that I want to print like this: "John, Paul, George, and Ringo." But this code doesn't do it:
let names = [ "John"; "Paul"; "George"; "Ringo" ]
names |> Seq.iter (fun s -> printf "%s, " s)
My procedural instinct is to seek a way to insinuate a predicate into that lambda so that it can branch between ", ", ", and ", or ". ", depending on where we are in the sequence. I think that's wrong, but I'm feeling around for what's right.
Would it be better to split the sequence into parts?
In this case it seems that we want to split the sequence into parts corresponding to the distinct delimiter behaviors. Since we need to split relative to the end, the Seq module won't do, but we can use List.splitAt instead.
let start, ending = List.splitAt (names.Length - 1) names
let penultimate, last = List.splitAt 1 ending
start |> Seq.iter (fun s -> printf "%s, " s)
penultimate |> Seq.iter (fun s -> printf "%s, and " s)
last |> Seq.iter (fun s -> printf "%s. " s)
Is this a righteous approach? Is there a better solution I've overlooked? Am I thinking along the right lines?
The general approach I take to tackle these kinds of problems is to split them into smaller parts and solve each individually:
an empty list [] results in ""
one element ["a"] results in "a."
two elements [ "a"; "b" ] result in "a and b."
more elements (that is, a :: rest) result in "a, " + takeCareOf rest, where takeCareOf follows the above rules. Note that we don't need to know the length of the full list.
The above recipe translates directly to F# (and to functional languages in general):
let rec commaAndDot' = function
| [] -> ()
| [ a ] -> printfn "%s." a
| a :: [ b ] -> printfn "%s and %s." a b
| a :: rest -> printf "%s, " a; commaAndDot' rest
Are we done yet? No, commaAndDot' violates the Single Responsibility Principle because the function implements our 'business logic' and prints to the console. Let's fix that:
let rec commaAndDot'' = function
| [] -> ""
| [ a ] -> sprintf "%s." a
| a :: [ b ] -> sprintf "%s and %s." a b
| a :: rest -> sprintf "%s, " a + commaAndDot'' rest
As an additional benefit we can now call the function in parallel and the output does not get mixed up anymore.
Are we done yet? No, the above function is not tail-recursive (we need to compute commaAndDot'' rest before concatenating it to the current result) and would blow the stack for large lists. A standard approach to fixing this is to introduce an accumulator acc:
let commaAndDot''' words =
    let rec helper acc = function
        | [] -> acc
        | [ a ] -> sprintf "%s%s." acc a
        | a :: [ b ] -> sprintf "%s%s and %s." acc a b
        | a :: rest -> helper (acc + sprintf "%s, " a) rest
    helper "" words
Are we done yet? No, commaAndDot''' creates a lot of strings for intermediate results. Thanks to F# not being a pure language, we can leverage local (private, non-observable) mutation to optimize for memory and speed:
let commaAndDot words =
    let sb = System.Text.StringBuilder()
    let rec helper = function
        | [] -> sb
        | [ a ] -> sprintf "%s." a |> sb.Append
        | a :: [ b ] -> sprintf "%s and %s." a b |> sb.Append
        | a :: rest ->
            sprintf "%s, " a |> sb.Append |> ignore
            helper rest
    helper words |> string
Are we done yet? Probably... at least this is something I would consider idiomatic F# and happily commit. For optimising further (e.g. appending commas and dots separately, or changing the order of the patterns) I'd first write micro-benchmarks before sacrificing readability.
All versions generate the same output:
commaAndDot [] // ""
commaAndDot [ "foo" ] // "foo."
commaAndDot [ "foo"; "bar" ] // "foo and bar."
commaAndDot [ "Hello"; "World"; "F#" ] // "Hello, World and F#."
Update: SCNR, created a benchmark... results are below as a HTML snippet (for nice tabular data).
BuilderOpt is the StringBuilder version with the [] case moved to the bottom,
BuilderChained is with chained Append calls, e.g. sb.Append(a).Append(" and ").Append(b) and BuilderFormat is e.g. sb.AppendFormat("{0} and {1}", a, b). Full source code available.
As expected, 'simpler' versions perform better for small lists; the larger the list, the better BuilderChained performs. Concat performs better than I expected, but it does not produce the right output (missing ".", lacking one case). Yield gets rather slow...
<!DOCTYPE html>
<html lang='en'>
<head>
<meta charset='utf-8' />
<title>Benchmark.CommaAndDot</title>
<style type="text/css">
table { border-collapse: collapse; display: block; width: 100%; overflow: auto; }
td, th { padding: 6px 13px; border: 1px solid #ddd; }
tr { background-color: #fff; border-top: 1px solid #ccc; }
tr:nth-child(even) { background: #f8f8f8; }
</style>
</head>
<body>
<pre><code>
BenchmarkDotNet=v0.11.1, OS=Windows 10.0.16299.726 (1709/FallCreatorsUpdate/Redstone3)
Intel Core i7 CPU 950 3.07GHz (Nehalem), 1 CPU, 8 logical and 4 physical cores
Frequency=2998521 Hz, Resolution=333.4977 ns, Timer=TSC
[Host] : .NET Framework 4.7.2 (CLR 4.0.30319.42000), 64bit LegacyJIT-v4.7.3190.0 DEBUG
DefaultJob : .NET Framework 4.7.2 (CLR 4.0.30319.42000), 64bit RyuJIT-v4.7.3190.0
</code></pre>
<pre><code></code></pre>
<table>
<thead><tr><th> Method</th><th>Verbosity</th><th> Mean</th><th>Error</th><th>StdDev</th><th> Median</th><th>Scaled</th><th>ScaledSD</th>
</tr>
</thead><tbody><tr><td>Concat</td><td>0</td><td>39.905 ns</td><td>0.0592 ns</td><td>0.0494 ns</td><td>39.906 ns</td><td>1.02</td><td>0.11</td>
</tr><tr><td>Yield</td><td>0</td><td>27.235 ns</td><td>0.0772 ns</td><td>0.0603 ns</td><td>27.227 ns</td><td>0.69</td><td>0.07</td>
</tr><tr><td>Accumulator</td><td>0</td><td>1.956 ns</td><td>0.0109 ns</td><td>0.0096 ns</td><td>1.954 ns</td><td>0.05</td><td>0.01</td>
</tr><tr><td>Builder</td><td>0</td><td>32.384 ns</td><td>0.2986 ns</td><td>0.2331 ns</td><td>32.317 ns</td><td>0.82</td><td>0.09</td>
</tr><tr><td>BuilderOpt</td><td>0</td><td>33.664 ns</td><td>1.0371 ns</td><td>0.9194 ns</td><td>33.402 ns</td><td>0.86</td><td>0.09</td>
</tr><tr><td>BuilderChained</td><td>0</td><td>39.671 ns</td><td>1.2097 ns</td><td>3.5669 ns</td><td>41.339 ns</td><td>1.00</td><td>0.00</td>
</tr><tr><td>BuilderFormat</td><td>0</td><td>40.276 ns</td><td>0.8909 ns</td><td>1.8792 ns</td><td>39.494 ns</td><td>1.02</td><td>0.12</td>
</tr><tr><td>Concat</td><td>1</td><td>153.116 ns</td><td>1.1592 ns</td><td>0.9050 ns</td><td>152.706 ns</td><td>0.87</td><td>0.01</td>
</tr><tr><td>Yield</td><td>1</td><td>154.522 ns</td><td>0.2890 ns</td><td>0.2256 ns</td><td>154.479 ns</td><td>0.88</td><td>0.00</td>
</tr><tr><td>Accumulator</td><td>1</td><td>223.342 ns</td><td>0.3678 ns</td><td>0.2872 ns</td><td>223.412 ns</td><td>1.27</td><td>0.00</td>
</tr><tr><td>Builder</td><td>1</td><td>232.194 ns</td><td>0.2951 ns</td><td>0.2465 ns</td><td>232.265 ns</td><td>1.32</td><td>0.00</td>
</tr><tr><td>BuilderOpt</td><td>1</td><td>232.016 ns</td><td>0.5654 ns</td><td>0.4722 ns</td><td>232.170 ns</td><td>1.31</td><td>0.00</td>
</tr><tr><td>BuilderChained</td><td>1</td><td>176.473 ns</td><td>0.3918 ns</td><td>0.3272 ns</td><td>176.341 ns</td><td>1.00</td><td>0.00</td>
</tr><tr><td>BuilderFormat</td><td>1</td><td>219.262 ns</td><td>6.7995 ns</td><td>6.3603 ns</td><td>217.003 ns</td><td>1.24</td><td>0.03</td>
</tr><tr><td>Concat</td><td>10</td><td>1,284.042 ns</td><td>1.7035 ns</td><td>1.4225 ns</td><td>1,283.443 ns</td><td>1.68</td><td>0.05</td>
</tr><tr><td>Yield</td><td>10</td><td>6,532.667 ns</td><td>12.6169 ns</td><td>10.5357 ns</td><td>6,533.504 ns</td><td>8.55</td><td>0.24</td>
</tr><tr><td>Accumulator</td><td>10</td><td>2,701.483 ns</td><td>4.8509 ns</td><td>4.5376 ns</td><td>2,700.208 ns</td><td>3.54</td><td>0.10</td>
</tr><tr><td>Builder</td><td>10</td><td>1,865.668 ns</td><td>5.0275 ns</td><td>3.9252 ns</td><td>1,866.920 ns</td><td>2.44</td><td>0.07</td>
</tr><tr><td>BuilderOpt</td><td>10</td><td>1,820.402 ns</td><td>2.7853 ns</td><td>2.3258 ns</td><td>1,820.464 ns</td><td>2.38</td><td>0.07</td>
</tr><tr><td>BuilderChained</td><td>10</td><td>764.334 ns</td><td>19.8528 ns</td><td>23.6334 ns</td><td>756.988 ns</td><td>1.00</td><td>0.00</td>
</tr><tr><td>BuilderFormat</td><td>10</td><td>1,177.186 ns</td><td>1.9584 ns</td><td>1.6354 ns</td><td>1,177.897 ns</td><td>1.54</td><td>0.04</td>
</tr><tr><td>Concat</td><td>100</td><td>25,579.773 ns</td><td>824.1504 ns</td><td>688.2028 ns</td><td>25,288.873 ns</td><td>5.33</td><td>0.14</td>
</tr><tr><td>Yield</td><td>100</td><td>421,872.560 ns</td><td>902.5023 ns</td><td>753.6302 ns</td><td>421,782.071 ns</td><td>87.87</td><td>0.23</td>
</tr><tr><td>Accumulator</td><td>100</td><td>80,579.168 ns</td><td>227.7392 ns</td><td>177.8038 ns</td><td>80,547.868 ns</td><td>16.78</td><td>0.05</td>
</tr><tr><td>Builder</td><td>100</td><td>15,047.790 ns</td><td>26.2248 ns</td><td>21.8989 ns</td><td>15,048.903 ns</td><td>3.13</td><td>0.01</td>
</tr><tr><td>BuilderOpt</td><td>100</td><td>15,287.117 ns</td><td>39.8679 ns</td><td>31.1262 ns</td><td>15,293.739 ns</td><td>3.18</td><td>0.01</td>
</tr><tr><td>BuilderChained</td><td>100</td><td>4,800.966 ns</td><td>11.3614 ns</td><td>10.0716 ns</td><td>4,801.450 ns</td><td>1.00</td><td>0.00</td>
</tr><tr><td>BuilderFormat</td><td>100</td><td>8,382.896 ns</td><td>87.8963 ns</td><td>68.6236 ns</td><td>8,368.400 ns</td><td>1.75</td><td>0.01</td>
</tr></tbody></table>
</body>
</html>
I prefer using String.concat:
let names = [ "John"; "Paul"; "George"; "Ringo" ]
names
|> List.mapi (fun i n -> if i = names.Length - 1 && i > 0 then "and " + n else n)
|> String.concat ", "
|> printfn "%s"
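For comparison only (not from the answer above), the same map-with-index-then-concat idea reads almost identically in Python:

```python
names = ["John", "Paul", "George", "Ringo"]

# prefix "and " onto the last name, then join everything with commas
parts = ["and " + n if i == len(names) - 1 and i > 0 else n
         for i, n in enumerate(names)]
print(", ".join(parts))  # → John, Paul, George, and Ringo
```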
Basic techniques are mentioned in the accepted answer: problem deconstruction and separation of concerns. Either there is no element, or there is an element followed by ".", ", and ", or ", ", depending on its position relative to the end of the input sequence.
Assuming that the input is of type string list, this can be expressed fairly well by a recursive, pattern-matching function, wrapped inside a list sequence expression to ensure tail recursion. The match does nothing if the input is empty, so it returns an empty list; it returns a sub-list for the other terminating case; otherwise it appends the results of the recursion to the sub-list.
Concatenation into the desired target type string is a separate, final step, as proposed in another answer.
let rec seriesComma xs = [
    match xs with
    | [] -> ()
    | [x] -> yield! [x; "."]
    | x::[y] -> yield! [x; ", and "]; yield! seriesComma [y]
    | x::xs -> yield! [x; ", "]; yield! seriesComma xs ]
["Chico"; "Harpo"; "Groucho"; "Gummo"; "Zeppo"]
|> seriesComma |> String.concat ""
// val it : string = "Chico, Harpo, Groucho, Gummo, and Zeppo."
Seq.reduce is the simplest way to build a delimited list, but including the "and" before the last item adds some complexity. Below I show a way to do it in two steps, but the recursive approach in the accepted answer is probably more true to the functional programming paradigm.
let names = [ "John"; "Paul"; "George"; "Ringo" ]
let delimitedNames = names |> Seq.reduce (fun x y -> sprintf "%s, %s" x y)
let replaceLastOccurrence (hayStack: string) (needle: string) (newNeedle: string) =
    let idx = hayStack.LastIndexOf needle
    match idx with
    | -1 -> hayStack
    | _ -> hayStack.Remove(idx, needle.Length).Insert(idx, newNeedle)
replaceLastOccurrence delimitedNames "," ", and"
See https://msdn.microsoft.com/en-us/visualfsharpdocs/conceptual/seq.reduce%5B%27t%5D-function-%5Bfsharp%5D?f=255&MSPPError=-2147217396
Well, a more functional-looking solution could be something like this:
let names = [ "John"; "Paul"; "George"; "Ringo" ]
names
|> Seq.tailBack
|> Seq.iter (fun s -> printf "%s, " s)
names
|> Seq.last
|> fun s -> printf "and %s" s
Where tailBack can be defined in some SequenceExtensions.fs like
module Seq

let tailBack seq =
    seq
    |> Seq.rev
    |> Seq.tail
    |> Seq.rev
This way you do not deal much with indexes, variables, and all that procedural stuff.
Ideally you would leverage options here, like
names
|> Seq.tryLast
|> Option.iter (fun s -> printf "and %s" s)
With this you would also avoid possible argument exceptions. But options in functional programming is another (nice) concept than sequences.
Also, the particular task matters here. I believe this solution is quite inefficient: we iterate the sequence several times. Maybe in some cases fussing with indexes will be the way to go.
Melt and cast are popular operations for handling data in R.
In F# that would be sequences of records of the same type, or something close to it.
Are you aware of any such functions in F#?
(If not, who would be interested in making some strongly typed version of them...)
More information:
Melt takes a table as input. It has column titles (our record fields) and a series of rows. The columns can be grouped into a set of 'identifiers' and a set of 'variables'.
Melt puts this table into a new canonical form whose columns are: the identifiers, a column named "variable", and a column named "value".
If you originally had 10 'variables', like size, weight, etc., then for each previous record you will get 10 records in the canonical form, with the "variable" column filled with the titles of the previous 'variable' columns.
Cast, conversely, reconstructs a table from a melted one.
A short example in R, melt takes data (dat) that looks like this:
a b c
1 1 0.48411551 0.2372291
2 2 0.58850308 0.3968759
3 3 0.74412592 0.9718320
4 4 0.93060118 0.8665092
5 5 0.01556804 0.2512399
and makes it look like this:
> melt(dat,id.vars = "a")
a variable value
1 1 b 0.48411551
2 2 b 0.58850308
3 3 b 0.74412592
4 4 b 0.93060118
5 5 b 0.01556804
6 1 c 0.23722911
7 2 c 0.39687586
8 3 c 0.97183200
9 4 c 0.86650918
10 5 c 0.25123992
cast essentially does the reverse.
Those 2 operations are extremely powerful on a day to day basis.
Once you have them it changes your thinking, very much like FP does.
Assuming melt is similar to SQL Server's unpivot, this ought to do the trick:
let melt keys (table: DataTable) =
    let out = new DataTable()
    let keyCols, otherCols =
        table.Columns
        |> Seq.cast<DataColumn>
        |> Seq.toArray
        |> Array.partition (fun c -> keys |> Seq.exists (fun k -> k = c.ColumnName))
    for c in keyCols do
        out.Columns.Add(c.ColumnName) |> ignore
    out.Columns.Add("Key", typeof<string>) |> ignore
    out.Columns.Add("Value") |> ignore
    for r in table.Rows do
        for c in otherCols do
            let values = [|
                for c in keyCols do yield r.[c]
                yield box c.ColumnName
                yield r.[c]
            |]
            out.Rows.Add(values) |> ignore
    out
Here's a little test to try it out:
let table = new DataTable()

[| "Country", typeof<string>
   "2001", typeof<int>
   "2002", typeof<int>
   "2003", typeof<int> |]
|> Array.map (fun (name, typ) -> new DataColumn(name, typ))
|> table.Columns.AddRange

[ "Nigeria", 1, 2, 3
  "UK", 2, 3, 4 ]
|> List.iter (fun (a, b, c, d) -> table.Rows.Add(a, b, c, d) |> ignore)
let table2 = table |> melt ["Country"]
table2.Rows
|> Seq.cast<DataRow>
|> Seq.iter (fun r ->
    for (c: DataColumn) in table2.Columns do
        printfn "%A: %A" c.ColumnName r.[c]
    printfn "")
which yields
"Country": "Nigeria"
"Key": "2001"
"Value": "1"
"Country": "Nigeria"
"Key": "2002"
"Value": "2"
...
Assuming cast goes the other way (i.e. pivot), you should be able to take this code and come up with a translation.
If you're doing this a lot, you might find it easier to load your data into SQL Server and use the built-in operators.
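To make the round trip concrete, here is a minimal pure-Python sketch of what the two operations compute (the function names melt and cast are mine, and the row order of the melted output differs from R's, which groups by variable):

```python
def melt(rows, id_vars):
    # wide -> long: one output row per (input row, non-identifier column)
    out = []
    for row in rows:
        ids = {k: row[k] for k in id_vars}
        for k, v in row.items():
            if k not in id_vars:
                out.append({**ids, "variable": k, "value": v})
    return out

def cast(rows, id_vars):
    # long -> wide: regroup the (variable, value) pairs by identifier key
    wide = {}
    for row in rows:
        key = tuple(row[k] for k in id_vars)
        rec = wide.setdefault(key, dict(zip(id_vars, key)))
        rec[row["variable"]] = row["value"]
    return list(wide.values())

dat = [{"a": 1, "b": 0.48, "c": 0.24},
       {"a": 2, "b": 0.59, "c": 0.40}]
melted = melt(dat, ["a"])   # 4 rows with columns a / variable / value
assert cast(melted, ["a"]) == dat
```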
Are you aware of any such functions in F#?
There are no such functions in the F# standard library.
A short example in R
Your example data may be written in F# like this:
let header, data =
    [ "a"; "b"; "c" ],
    [ 1, 0.48411551, 0.2372291
      2, 0.58850308, 0.3968759
      3, 0.74412592, 0.9718320
      4, 0.93060118, 0.8665092
      5, 0.01556804, 0.2512399 ]
and then "melted" like this:
let melt header data =
    let header, data = Array.ofSeq header, Array.ofSeq data
    [ header.[0], "variable", "value" ],
    [ for a, b, c in data do
        yield a, "b", b
        yield a, "c", c ]
Note that static typing requires that your "b" and "c" columns contain values of the same type because they have been merged into a single column.
Those 2 operations are extremely powerful on a day to day basis. Once you have them it changes your thinking, very much like FP does.
I don't understand why. I suspect this is an XY problem: you are describing how problems can be solved in R, when the same problem might be better solved in F# with a more typeful approach, such as a map from "a" to a map from "variable" to "value". But without any idea of what anyone might want these functions for, I cannot be sure.