Based on THIS question, I realized that calculating such numbers does not seem possible in the regular way.
Any suggestions?
It is possible, but you need an algorithm that is a bit more clever than the naive solution. If you write the naive power function, you do something along the lines of:
pow(_, 0) -> 1;
pow(A, 1) -> A;
pow(A, N) -> A * pow(A, N-1).
which just unrolls the power function. But the problem is that in your case, that will be 262144 multiplications, on increasingly larger numbers. The trick is a pretty simple insight: if you divide N by 2, and square A, you almost have the right answer, except if N is odd. So if we add a correction factor for the odd case, we obtain:
-module(z).
-compile(export_all).
pow(_, 0) -> 1;
pow(A, 1) -> A;
pow(A, N) ->
B = pow(A, N div 2),
B * B * (case N rem 2 of 0 -> 1; 1 -> A end).
This completes almost instantly on my machine:
2> element(1, timer:tc(fun() -> z:pow(5, 262144) end)).
85568
Of course, if you are doing many such operations, 85 ms (timer:tc reports microseconds) is hardly acceptable. But computing this is actually rather fast.
(if you want more information, take a look at: https://en.wikipedia.org/wiki/Exponentiation_by_squaring )
If you are interested in how to compute the power using the same algorithm as in I GIVE CRAP ANSWERS's solution, but in tail-recursive code, here it is:
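%% bits/2 collects the exponent's binary digits below the leading 1,
%% most significant first; power/3 then squares the accumulator for each
%% digit and multiplies in X when the digit is 1 (left-to-right binary
%% exponentiation).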
power(X, 0) when is_integer(X) -> 1;
power(X, Y) when is_integer(X), is_integer(Y), Y > 0 ->
Bits = bits(Y, []),
power(X, Bits, X).
power(_, [], Acc) -> Acc;
power(X, [0|Bits], Acc) -> power(X, Bits, Acc*Acc);
power(X, [1|Bits], Acc) -> power(X, Bits, Acc*Acc*X).
bits(1, Acc) -> Acc;
bits(Y, Acc) ->
bits(Y div 2, [Y rem 2 | Acc]).
It is simple: since Erlang uses arbitrary-precision integers (big numbers), you can define your own pow function for integers, for example:
-module(test).
-export([int_pow/2]).
int_pow(N,M)->int_pow(N,M,1).
int_pow(_,0,R) -> R;
int_pow(N,M,R) -> int_pow(N,M-1,R*N).
Note that I did not check the arguments and only showed an implementation sufficient for your example.
You can do:
defmodule Pow do
def powa(x, n), do: powa(x, n, 1)
def powa(_, 0, acc), do: acc
def powa(x, n, acc), do: powa(x, n-1, acc * x)
end
Apparently
Pow.powa(5, 262144) |> to_string |> String.length
yields
183231
which is the number of digits of the long number you were curious about.
Haskell replaces for loops over iterable objects with map :: (a -> b) -> [a] -> [b] or
fmap :: (a -> b) -> f a -> f b. (This question isn't limited to Haskell, I'm just using the syntax here.)
Is there something similar that replaces a while loop, like
wmap :: ([a] -> b) -> [a] -> ([b] -> Bool) -> [b]?
This function returns a list of b.
The first argument is a function that takes a list and computes a value that will end up in the list returned by wmap (so it's a very specific kind of while loop).
The second argument is the list that we use as our starting point.
The third argument is a function that evaluates the stopping criterion.
And as a functor,
wfmap :: (f a -> b) -> f a -> (f b -> Bool) -> f b
For example, a Jacobi solver would look like this (with b now the same type as a):
jacobi :: ([a] -> [a]) -> [a] -> ([a] -> Bool) -> [a]
What I'm looking for isn't really pure. wmap could have values that mutate internally, but only exist inside the function. It also has nondeterministic runtime, if it terminates at all.
In the case of a Gauss-Seidel solver, there would be no return value, since the [a] would be modified in place.
Something like this:
gs :: ([a] -> [a]) -> [a] -> ([a] -> Bool) -> ???
Does wmap or wfmap exist as part of any language by default, and what is it called?
Answer 1 (thanks to Bergi): Instead of the silly wmap/wfmap signature, we already have until.
Does an in place version of until exist for things like gs?
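For reference, until from the Prelude already has this shape: until :: (a -> Bool) -> (a -> a) -> a -> a keeps applying the step function until the predicate holds. Here is a minimal sketch of it standing in for a while loop (the Newton iteration is just my illustrative example, not something from the question):
-- "while the guess is not close enough, improve it"
sqrtNewton :: Double -> Double
sqrtNewton a = until close improve 1.0
  where
    improve x = (x + a / x) / 2           -- loop body: one refinement step
    close x   = abs (x * x - a) < 1e-12   -- stopping criterion
-- sqrtNewton 2 is approximately 1.4142135623730951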
There is a proverb in engineering which states "Don't generalize before you have at least 3 implementations". There is some truth to it - especially when looking for new functional iteration concepts before doing it by foot a few times.
"Doing it by foot" here means, you should - if there is no friendly helper function you know of - resort to recursion. Write your "special cases" recursively. Preferably in a tail recursive form. Then, if you start to see recurring patterns, you might come up with a way to refactor into some recurring iteration scheme and its "kernel".
Let's, for the sake of clarifying the above, assume you never heard of foldl and you want to accumulate a result from iterating over a list... Then, you would write something like:
myAvg values =
total / (length values)
where
mySum acc [] = acc
mySum acc (x:xs) = mySum (acc + x) xs
total = mySum 0 values
And after doing this a couple of times, the pattern might show that the recursions in those where clauses always look darn similar. You might then come up with a name like "fold" or "reduce" for that inner recursion snippet and end up with:
myAvg values = (foldl (+) 0.0 values) / fromIntegral (length values) :: Float
So, if you are looking for helper functions which help with your use-cases, my advice is you first write a few instances as recursive functions and then look for patterns.
So, with all that said, let's get our fingers wet and see how the Jacobi algorithm could translate to Haskell. Just so we have something to talk about. Now - usually I do not use Haskell for anything requiring arrays (containers with O(1) element access), because there are at least 5 array packages I know of and I would have to read for 2 days to decide which one is suitable for my application. TL;DR ;). So I stick with lists and NO package dependencies beyond the Prelude in the code below. But that is - given that the example equations we try to solve are tiny - not a bad thing at all. Plus, the code demonstrates that list comprehensions in lazy Haskell allow for un-imperative and yet performant operations on sets of cells (e.g. in the matrix), without any need for explicit looping.
type Matrix = [[Double]]
-- sorry - my mind went blank while looking for a better name for this...
-- but it is useful nonetheless
idefix nr nc =
[ [(r,c) | c <- [0..nc-1]] | r <- [0..nr-1]]
matElem m (r,c) = (m !! r) !! c
transpose (r,c) = (c,r)
matrixDim m = (length m, length . head $ m)
-- constructs a Matrix by enumerating the indices and querying
-- 'unfolder' for a value.
-- try "unfoldMatrix 3 3 id" and you see how indices relate to
-- cells in the matrix.
unfoldMatrix nr nc unfolder =
fmap (\row -> fmap (\cell -> unfolder cell) row) $ idefix nr nc
-- Not really needed for Jacobi problem but good
-- training to get our fingers wet with unfoldMatrix.
transposeMatrix m =
let (nr,nc) = matrixDim m in
unfoldMatrix nc nr (matElem m . transpose)
addMatrix m1 m2
| (matrixDim m1) == (matrixDim m2) =
let (nr,nc) = matrixDim m1 in
unfoldMatrix nr nc (\idx -> matElem m1 idx + matElem m2 idx)
subMatrix m1 m2
| (matrixDim m1) == (matrixDim m2) =
let (nr,nc) = matrixDim m1 in
unfoldMatrix nr nc (\idx -> matElem m1 idx - matElem m2 idx)
dluMatrix :: Matrix -> (Matrix,Matrix,Matrix)
dluMatrix m
| (fst . matrixDim $ m) == (snd . matrixDim $ m) =
let n = fst . matrixDim $ m in
(unfoldMatrix n n (\(r,c) -> if r == c then matElem m (r,c) else 0.0)
,unfoldMatrix n n (\(r,c) -> if r > c then matElem m (r,c) else 0.0)
,unfoldMatrix n n (\(r,c) -> if c > r then matElem m (r,c) else 0.0)
)
mulMatrix m1 m2
| (snd . matrixDim $ m1) == (fst . matrixDim $ m2) =
let (nr, nc) = ((fst . matrixDim $ m1),(snd . matrixDim $ m2)) in
unfoldMatrix nr nc
(\(ro,co) ->
sum [ matElem m1 (ro,i) * matElem m2 (i,co) | i <- [0..nr-1]]
)
isSquareMatrix m = let (nr,nc) = matrixDim m in nr == nc
jacobi :: Double -> Matrix -> Matrix -> Matrix -> Matrix
jacobi errMax a b x0
| isSquareMatrix a && (snd . matrixDim $ a) == (fst . matrixDim $ b) =
approximate x0
-- We could possibly avoid our hand rolled recursion
-- with the help of 'loop' from Control.Monad.Extra
-- according to hoogle. But it would not look better at all.
-- loop (\x -> let x' = jacobiStep x in if converged x' then Right x' else Left x') x0
where
(nra, nca) = matrixDim a
(d,l,u) = dluMatrix a
dinv = unfoldMatrix nra nca (\(r,c) ->
if r == c
then 1.0 / matElem d (r,c)
else 0.0)
lu = addMatrix l u
converged x =
let delta = (subMatrix (mulMatrix a x) b) in
let (nrd,ncd) = matrixDim delta in
let err = sum (fmap (\idx -> let v = matElem delta idx in v * v)
(concat (idefix nrd ncd))) in
err < errMax
jacobiStep x =
(mulMatrix dinv (subMatrix b (mulMatrix lu x)))
approximate x =
let x' = jacobiStep x in
if converged x' then x' else approximate x'
wikiExample errMax =
let a = [[ 2.0, 1.0],[5.0,7.0]] in
let b = [[11], [13]] in
jacobi errMax a b [[1.0],[1.0]]
Function idefix, despite its silly name, IMHO is an eye opener for people coming from non-lazy languages. Their first reflex is to get scared: "What - he creates a list with the indices instead of writing loops? What a waste!" But a waste it is not, in lazy languages. What you see in this function (the list comprehension) produces a lazy list. It is not really created. What happens behind the scenes is similar in spirit to what LINQ does in C# - IEnumerator<T> juggling.
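To see that laziness at work, here is a tiny check against the definitions above (the huge dimensions are deliberately exaggerated and only my example): asking for three index pairs of the first row forces exactly those three pairs and nothing else.
lazyDemo :: [(Int, Int)]
lazyDemo = take 3 (head (idefix (10^6) (10^6)))
-- evaluates instantly to [(0,0),(0,1),(0,2)]; the remaining rows and
-- columns of the nominal million-by-million index matrix are never built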
We use idefix a second time when we want to sum all elements in our delta. There, we do not care about the concrete structure of the matrix. And so we use the standard prelude function concat to flatten the Matrix into a linear list. Lazy as well, of course. That is the beauty.
The next notable difference from the imperative Wikipedia pseudo code is that using matrix notation is much less complicated than nested looping and operating on single cells. Fortunately, the Wikipedia article shows both. So, instead of a while loop with 2 nested loops, we only need an equivalent of the outermost while loop, which is covered by our 2-line recursive function approximate.
Lessons learned:
Lists and list comprehensions can help simplify code otherwise requiring nested loops. (In lazy languages).
OCaml and Common Lisp have mutability, built-in arrays, and loops. Together, that makes them very convenient when translating algorithms from imperative languages or imperative pseudo code.
Haskell has immutability, no built-in arrays, and no loops, but instead a similarly powerful set of tools, namely laziness, tail call optimization, and a terse syntax. That combination requires more planning (and writing some usually short helper functions) instead of the classical C approach of "Let's write it all in main()."
Sometimes it is easier to write a 2 line long recursive function than to think about how to abstract it.
In FP, you don't usually try to fit everything "inside the loop." You do one step and pass it on to the next function. There are lots of combinations that are useful in different situations. A common replacement for a while loop is a map followed by a takeWhile or a dropWhile, but there are many other possibilities, up to just plain recursion.
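To make that concrete, here is a small sketch of the idiom (my own example, not the answerer's): iterate produces the stream of successive loop states, and takeWhile cuts the stream off at the stopping condition; together they play the role of a while loop.
-- "while x /= 1, apply the Collatz step", written as a pipeline
collatzSteps :: Int -> [Int]
collatzSteps = takeWhile (/= 1) . iterate step
  where
    step x
      | even x    = x `div` 2
      | otherwise = 3 * x + 1
-- collatzSteps 6 == [6,3,10,5,16,8,4,2]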
As an exercise to understand recursion on a well-founded relation, I decided to implement the extended Euclidean algorithm.
The extended euclidean algorithm works on integers, so I need some
well-founded relation on integers. I tried to use the relations in Zwf, but things didn't work (I need to see more examples). I decided it would be easier to map Z to nat with the Z.abs_nat function and then just use Nat.lt as the relation. Our friend wf_inverse_image comes to help me here. So here is what I did:
Require Import ZArith Coq.ZArith.Znumtheory.
Require Import Wellfounded.
Definition fabs := (fun x => Z.abs_nat (Z.abs x)). (* (Z.abs x) is an involutive nice guy to help me in the future *)
Definition myR (x y : Z) := (fabs x < fabs y)%nat.
Definition lt_wf_on_Z := (wf_inverse_image Z nat lt fabs) lt_wf.
The extended Euclidean algorithm goes like this:
Definition euclids_type (a : Z) := forall b : Z, Z * Z * Z.
Definition euclids_rec : (forall x : Z, (forall y : Z,(myR y x) -> euclids_type y) -> euclids_type x).
unfold myR, fabs.
refine (fun a rec b => if (Z_eq_dec a 0) then (b, 0, 1)
else let '(g, s, t) := rec (b mod a ) _ a
in (g, t - (b / a) * s, s)
).
apply Zabs_nat_lt. split. apply Z.abs_nonneg. apply Z.mod_bound_abs. assumption.
Defined.
Definition euclids := Fix lt_wf_on_Z _ euclids_rec.
Now let's see if it works:
Compute (euclids 240 46). (* Computation takes a long time and results in a huge term *)
I know that this can happen if some definition is opaque; however, all my definitions end with Defined.. Okay, something else is opaque, but what?
If it is a library definition, then I don't think it would be cool to just redefine it in my code.
It seems that my problem is related to this, this other one, and this too.
I decided to give Program Fixpoint a try, since I never used it. I was surprised to see that I could just copy and paste my program.
Program Fixpoint euclids' (a b: Z) {measure (Z.abs_nat (Z.abs a))} : Z * Z * Z :=
if Z.eq_dec a 0 then (b, 0, 1)
else let '(g, s, t) := euclids' (b mod a) a in
(g, t - (b / a) * s, s).
Next Obligation.
apply Zabs_nat_lt. split. apply Z.abs_nonneg. apply Z.mod_bound_abs. assumption.
Defined.
And I was even more surprised to see that it works just fine:
Compute (euclids' 240 46). (* fast computation gives me (2, -9, 47): Z * Z * Z *)
What is opaque in euclids that is not in euclids' ?
And how to make euclids work?
Okay, something else is opaque, but what?
wf_inverse_image is opaque and so are the lemmas it relies on: Acc_lemma and Acc_inverse_image. If you make these three transparent euclids will compute.
The evidence of well-foundedness is essentially the parameter you do structural recursion on, so it must be transparent.
And how to make euclids work?
Fortunately, you don't have to roll your own transparent versions of the aforementioned standard definitions, as there is the well_founded_ltof lemma in Coq.Arith.Wf_nat which is already transparent, so we can reuse it:
Lemma lt_wf_on_Z : well_founded myR.
Proof. exact (well_founded_ltof Z fabs). Defined.
That's it! After fixing lt_wf_on_Z the rest of your code just works.
How should I alter the factorial recursion to calculate only the odd or only the double elements of the factorial? For example, if:
multiplyOdds(4)
the result should be 1*3*5*7 = 105.
I know how recursion works; I just need a bit of help deciding which approach I should use.
Your function multiplyOdds(n) needs to multiply the first n odd numbers? Given that the nth odd number is equal to 2 * n - 1, you can easily write a recursive solution like the one below in Haskell:
multiplyOdds :: Int -> Int
multiplyOdds n = multiplyOddsTail n 1
multiplyOddsTail :: Int -> Int -> Int
multiplyOddsTail n acc = case n of
1 -> acc
n -> multiplyOddsTail (n - 1) (acc * (n * 2 - 1))
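For comparison (my addition, not part of the answer above), the same product can be written without explicit recursion, since the first n odd numbers are 1, 3, ..., 2*n - 1:
multiplyOdds' :: Int -> Int
multiplyOdds' n = product [1, 3 .. 2 * n - 1]
-- multiplyOdds' 4 == 105, i.e. 1*3*5*7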
As a followup to my earlier question on finding runs of the same character in a string, I would also like to find a functional algorithm to find all substrings of length greater than 2 that are ascending or descending sequences of letters or digits (e.g. "defgh", "34567", "XYZ", "fedcba", "NMLK", "9876", etc.) in a character string ([Char]). The only sequences that I am considering are substrings of A..Z, a..z, 0..9, and their descending counterparts. The return value should be a list of (zero-based offset, length) pairs. I am translating the "zxcvbn" password strength algorithm from JavaScript (containing imperative code) to Scala. I would like to keep my code as purely functional as possible, for all the usual reasons given for writing in the functional programming style.
My code is written in Scala, but I can probably translate an algorithm in any of Clojure, F#, Haskell, or pseudocode.
Example: For the string qweABCD13987, the function would return [(3,4),(9,3)].
I have written a rather monstrous function that I will post when I again have access to my work computer, but I am certain that a more elegant solution exists.
Once again, thanks.
I guess a nice solution for this problem is really more complicated than it seems at first.
I'm no Scala pro, so my solution is surely not optimal or particularly nice, but maybe it gives you some ideas.
The basic idea is to compute the difference between two consecutive characters; afterwards it unfortunately gets a bit messy. Ask me if some of the code is unclear!
object Sequences {
val s = "qweABCD13987"
val pairs = (s zip s.tail) toList // if s might be empty, add a check here
// = List((q,w), (w,e), (e,A), (A,B), (B,C), (C,D), (D,1), (1,3), (3,9), (9,8), (8,7))
// assuming all characters are either letters or digits
val diff = pairs map {case (t1, t2) =>
if (t1.isLetter ^ t2.isLetter) 0 else t1 - t2} // xor could also be replaced by !=
// = List(-6, 18, 36, -1, -1, -1, 0, -2, -6, 1, 1)   (the letter/digit boundary (D,1) maps to 0)
/**
 * @param xs A list indicating the differences between consecutive characters
 * @param current triple: (start index of the current sequence;
 *                number of current elements in the sequence;
 *                number indicating the direction, i.e. -1 = downwards, 1 = upwards, 0 = doesn't matter)
 * @return A list of triples similar to the argument
 */
def sequences(xs: Seq[Int], current: (Int, Int, Int) = (0, 1, 0)): List[(Int, Int, Int)] = xs match {
case Nil => current :: Nil
case (1 :: ys) =>
if (current._3 != -1)
sequences(ys, (current._1, current._2 + 1, 1))
else
current :: sequences(ys, (current._1 + current._2 - 1, 2, 1)) // "recompute" the current index
case (-1 :: ys) =>
if (current._3 != 1)
sequences(ys, (current._1, current._2 + 1, -1))
else
current :: sequences(ys, (current._1 + current._2 - 1, 2, -1))
case (_ :: ys) =>
current :: sequences(ys, (current._1 + current._2, 1, 0))
}
sequences(diff) filter (_._2 > 1) map (t => (t._1, t._2)) // use _._2 > 2 to keep only runs of length >= 3, as in the question
}
It's always best to split a problem into several smaller subproblems. I wrote a solution in Haskell, which is easier for me. It uses lazy lists, but I suppose you can convert it to Scala either using streams or by making the main function tail recursive and passing the intermediate result as an argument.
-- Mark all subsequences whose adjacent elements satisfy
-- the given predicate. Includes subsequences of length 1.
sequences :: (Eq a) => (a -> a -> Bool) -> [a] -> [(Int,Int)]
sequences p [] = []
sequences p (x:xs) = seq x xs 0 1   -- the run starts at offset 0; the next character is at offset 1
where
-- arguments: previous char, remaining characters,
-- start offset of the current subsequence, offset of the next character
seq _ [] lastOffs curOffs = [(lastOffs, curOffs - lastOffs)]
seq x (x':xs) lastOffs curOffs
| p x x' -- predicate matches - we're extending current subsequence
= seq x' xs lastOffs curOffs'
| otherwise -- output the currently marked subsequence and start a new one
= (lastOffs, curOffs - lastOffs) : seq x' xs curOffs curOffs'
where
curOffs' = curOffs + 1
-- Marks ascending subsequences.
asc :: (Enum a, Eq a) => [a] -> [(Int,Int)]
asc = sequences (\x y -> succ x == y)
-- Marks descending subsequences.
desc :: (Enum a, Eq a) => [a] -> [(Int,Int)]
desc = sequences (\x y -> pred x == y)
-- Returns True for subsequences of length at least 2.
validRange :: (Int, Int) -> Bool
validRange (offs, len) = len >= 2
-- Find all both ascending and descending subsequences of the
-- proper length.
combined :: (Enum a, Eq a) => [a] -> [(Int,Int)]
combined xs = filter validRange (asc xs) ++ filter validRange (desc xs)
-- test:
main = print $ combined "qweABCD13987"
Here is my approach in Clojure:
We can transform the input string so that we can apply your previous algorithm to find a solution. The algorithm won't be the most performant, but I think you will end up with more abstract and readable code.
The example string can be transformed in the following way:
user => (find-serials "qweABCD13987")
(0 1 2 # # # # 7 8 # # #)
Reusing the previous function "find-runs":
user => (find-runs (find-serials "qweABCD13987"))
([3 4] [9 3])
The final code will look like this:
(defn find-runs [s]
(let [ls (map count (partition-by identity s))]
(filter #(>= (% 1) 3)
(map vector (reductions + 0 ls) ls))))
(def pad "#")
(defn inc-or-dec? [a b]
(= (Math/abs (- (int a) (int b))) 1 ))
(defn serial? [a b c]
(or (inc-or-dec? a b) (inc-or-dec? b c)))
(defn find-serials [s]
(map-indexed (fn [x [a b c]] (if (serial? a b c) pad x))
(partition 3 1 (concat pad s pad))))
find-serials creates a 3 cell sliding window and applies serial? to detect the cells that are the beginning/middle/end of a sequence. The string is conveniently padded so the window is always centered over the original characters.
I'm trying to make a function that will solve a univariate polynomial equation in Standard ML, but it keeps giving me an error.
The code is below
(* Eval Function *)
fun eval (x::xs, a:real):real =
let
val v = x (* The first element, since its not multiplied by anything *)
val count = 1 (* We start counting from the second element *)
in
v + elms(xs, a, count)
end;
(* Helper Function*)
fun pow (base:real, 0) = 1.0
| pow (base:real, exp:int):real = base * pow(base, exp - 1);
(* A function that solves the equation except the last element in the equation, the constant *)
fun elms (l:real list, a:real, count:int):real =
if (length l) = count then 0.0
else ((hd l) * pow(a, count)) + elms((tl l), a, count + 1);
Now, the input should be the coefficients of the polynomial and a number to substitute for the variable, i.e. if we have the function 3x^2 + 5x + 1 and we want to substitute x with 2, then we would call eval as follows:
eval ([1.0, 5.0, 3.0], 2.0);
and the result should be 23.0. Sometimes, on different input, it gives me different answers, but on this input it gives me the following error:
uncaught exception Empty raised at:
smlnj/init/pervasive.sml:209.19-209.24
what could be my problem here?
Empty is raised when you run hd or tl on an empty list. hd and tl are almost never used in ML; lists are almost always deconstructed using pattern matching instead; it's much prettier and safer. You don't seem to have a case for empty lists, and I didn't go through your code to figure out what you did, but you should be able to work it out yourself.
After some recursive calls, the elms function gets an empty list as its argument. Since count is always greater than 0, (length l) = count is then always false, and the calls to hd and tl on the empty list fail right after that.
A good way to fix it is using pattern matching to handle empty lists on both eval and elms:
fun elms ([], _, _) = 0.0
| elms (x::xs, a, count) = (x * pow(a, count)) + elms(xs, a, count + 1)
fun eval ([], _) = 0.0
| eval (x::xs, a) = x + elms(xs, a, 1)