We want to program Gaussian elimination to calculate a basis (linear algebra) as an exercise for ourselves. It is not homework.
My first thought was to use [[Int]] as the structure for our matrix. I then thought we could sort the lists lexicographically. But then we have to calculate with the matrix, and that is where the problem is. Can someone give us some hints?
Consider using matrices from the hmatrix package. Among its modules you can find both a fast implementation of a matrix and a lot of linear algebra algorithms. Browsing their sources might help you with your doubts.
Here's a simple example of adding one row to another by splitting the matrix into rows.
import Numeric.Container
import Data.Packed.Matrix

addRow :: Container Vector t => Int -> Int -> Matrix t -> Matrix t
addRow from to m = let rows = toRows m in
  fromRows $ take to rows ++
             [(rows !! from) `add` (rows !! to)] ++
             drop (to + 1) rows
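For instance, a quick hypothetical check in GHCi (my own illustration), using fromLists and toLists from Data.Packed.Matrix to build and inspect a small matrix:
λ let m = fromLists [[1,2],[3,4]] :: Matrix Double
λ toLists (addRow 0 1 m)
[[1.0,2.0],[4.0,6.0]]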
Another example, this time by using matrix multiplication.
addRow :: (Product e, Container Vector e) =>
          Int -> Int -> Matrix e -> Matrix e
addRow from to m = m `add` (e <> m)
  where
    nrows = rows m
    e = buildMatrix nrows nrows
          (\(r,c) -> if (r,c) /= (to,from) then 0 else 1)
Cf. Container, Vector, Product.
It will be easier if you use [[Rational]] instead of [[Int]] since you get nice division.
You probably want to start by implementing the elementary row operations.
swap :: Int -> Int -> [[Rational]] -> [[Rational]]
swap r1 r2 m = -- a matrix with rows r1 and r2 swapped

scale :: Int -> Rational -> [[Rational]] -> [[Rational]]
scale r c m = -- a matrix with row r multiplied by c

addrow :: Int -> Int -> Rational -> [[Rational]] -> [[Rational]]
addrow r1 r2 c m = -- a matrix with (c * row r1) added to row r2
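If it helps, here is one possible way to fill in those stubs with plain list operations; only the names and types come from the stubs above, the bodies are a sketch of my own:
swap :: Int -> Int -> [[Rational]] -> [[Rational]]
swap r1 r2 m = [ pick i row | (i, row) <- zip [0..] m ]
  where
    pick i row
      | i == r1   = m !! r2
      | i == r2   = m !! r1
      | otherwise = row

scale :: Int -> Rational -> [[Rational]] -> [[Rational]]
scale r c m = [ if i == r then map (* c) row else row | (i, row) <- zip [0..] m ]

addrow :: Int -> Int -> Rational -> [[Rational]] -> [[Rational]]
addrow r1 r2 c m =
  [ if i == r2 then zipWith (+) row (map (* c) (m !! r1)) else row
  | (i, row) <- zip [0..] m ]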
In order to actually do Gaussian elimination, you need a way to decide what multiple of one row to add to another to get a zero. So given two rows:
5 4 3 2 1
7 6 5 4 3
We want to add c times row 1 to row 2 so that the 7 becomes a zero. So 7 + c * 5 = 0 and c = -7/5. So in order to solve for c all we need are the first elements of each row. Here's a function that finds c:
whatc :: Rational -> Rational -> Rational
whatc _ 0 = 0
whatc a b = - a / b
Also, as others have said, using lists to represent your matrix will give you worse performance. But if you're just trying to understand the algorithm, lists should be fine.
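To tie those pieces together, a hypothetical helper (the name eliminateBelow and its structure are mine; whatc and addrow are the functions above) could zero out the entries of column p below the pivot row p, assuming the pivot entry itself is nonzero (otherwise you would swap rows first):
eliminateBelow :: Int -> [[Rational]] -> [[Rational]]
eliminateBelow p m = foldl step m [p + 1 .. length m - 1]
  where
    step acc r =
      let c = whatc (acc !! r !! p) (acc !! p !! p)
      in addrow p r c acc
Repeating this for each pivot row in turn (swapping whenever a pivot is zero) gives the forward-elimination half of Gaussian elimination.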
Related
I came across this question in a coding competition. Given a number n, concatenate the binary representations of the first n positive integers and return the decimal value of the resulting number. Since the answer can be large, return the answer modulo 10^9+7.
n can be as large as 10^9.
E.g. n = 4. Number formed = 11011100 (1 = 1, 10 = 2, 11 = 3, 100 = 4). Decimal value of 11011100 = 220.
I found a Stack Overflow answer to this question, but the problem is that it only contains an O(n) solution.
Link: concatenate binary of first N integers and return decimal value
Since n can be up to 10^9, we need to come up with a solution that is better than O(n).
Here's some Python code that provides a fast solution; it uses the same ideas as in Abhinav Mathur's post. It requires Python >= 3.8, but it doesn't use anything particularly fancy from Python, and could easily be translated into another language. You'd need to write algorithms for modular exponentiation and modular inverse if they're not already available in the target language.
First, for testing purposes, let's define the slow and obvious version:
# Modulus that results are reduced by.
M = 10 ** 9 + 7

def slow_binary_concat(n):
    """
    Concatenate binary representations of 1 through n (inclusive).
    Reinterpret the resulting binary string as an integer.
    """
    concatenation = "".join(format(k, "b") for k in range(1, n + 1))
    return int(concatenation, 2) % M
Checking that we get the expected result:
>>> slow_binary_concat(4)
220
>>> slow_binary_concat(10)
462911642
Now we'll write a faster version. First, we split the range [1, n) into subintervals such that within each subinterval, all numbers have the same length in binary. For example, the range [1, 10) would be split into four subintervals: [1, 2), [2, 4), [4, 8) and [8, 10). Here's a function to do that splitting:
def split_by_bit_length(n):
    """
    Split the numbers in [1, n) by bit-length.
    Produces triples (a, b, 2**k). Each triple represents a subinterval
    [a, b) of [1, n), with a < b, all of whose elements have bit-length k.
    """
    a = 1
    while n > a:
        b = 2 * a
        yield (a, min(n, b), b)
        a = b
Example output:
>>> list(split_by_bit_length(10))
[(1, 2, 2), (2, 4, 4), (4, 8, 8), (8, 10, 16)]
Now for each subinterval, the value of the concatenation of all numbers in that subinterval is represented by a fairly simple mathematical sum, which can be computed in exact form. Here's a function to compute that sum modulo M:
def subinterval_concat(a, b, l):
    """
    Concatenation of values in [a, b), all of which have the same bit-length k.
    l is 2**k.
    Equivalently, sum(i * l**(b - 1 - i) for i in range(a, b)) modulo M.
    """
    n = b - a
    inv = pow(l - 1, -1, M)
    q = (pow(l, n, M) - 1) * inv
    return (a * q + (q - n) * inv) % M
I won't go into the evaluation of the sum here: it's a bit off-topic for this site, and it's hard to express without a good way to render formulas. If you want the details, that's a topic for https://math.stackexchange.com, or a page of fairly simple algebra.
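For anyone who does want the algebra, here is a sketch of one way to derive the closed form used above (my own write-up; n = b - a and q = (l^n - 1)/(l - 1) as in the code):

\begin{aligned}
S &= \sum_{i=a}^{b-1} i\,l^{\,b-1-i}
   = \sum_{m=0}^{n-1} (a+m)\,l^{\,n-1-m}
   = a\,q + T,
   \qquad T = \sum_{m=0}^{n-1} m\,l^{\,n-1-m},\\
(l-1)\,T &= \sum_{m=0}^{n-1} m\,l^{\,n-m} - \sum_{m=0}^{n-1} m\,l^{\,n-1-m}
          = \sum_{m=0}^{n-2} l^{\,n-1-m} - (n-1)
          = (q-1) - (n-1) = q - n,\\
S &= a\,q + \frac{q-n}{l-1}.
\end{aligned}

The code computes exactly this modulo M, with the division by l - 1 replaced by multiplication with its modular inverse.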
Finally, we want to put all the intervals together. Here's a function to do that.
def fast_binary_concat(n):
    """
    Fast version of slow_binary_concat.
    """
    acc = 0
    for a, b, l in split_by_bit_length(n + 1):
        acc = (acc * pow(l, b - a, M) + subinterval_concat(a, b, l)) % M
    return acc
A comparison with the slow version shows that we get the same results:
>>> fast_binary_concat(4)
220
>>> fast_binary_concat(10)
462911642
But the fast version can easily be evaluated for much larger inputs, where using the slow version would be infeasible:
>>> fast_binary_concat(10**9)
827129560
>>> fast_binary_concat(10**18)
945204784
You just have to note a simple pattern. Taking up your example for n=4, let's gradually build the solution starting from n=1.
1 -> 1 #1
2 -> 2^2(1) + 2 #6
3 -> 2^2[2^2(1)+2] + 3 #27
4 -> 2^3{2^2[2^2(1)+2]+3} + 4 #220
If you expand the coefficients of each term for n=4, you'll get the coefficients as:
1 -> (2^3)*(2^2)*(2^2)
2 -> (2^3)*(2^2)
3 -> (2^3)
4 -> (2^0)
Let N be the total number of bits in the string representation of our required number, and D(x) be the number of bits in x. The coefficients can then be written as:
1 -> 2^(N-D(1))
2 -> 2^(N-D(1)-D(2))
3 -> 2^(N-D(1)-D(2)-D(3))
... and so on
Since the value of D(x) will be the same for all x in the range [2^t, 2^(t+1) - 1] for a given t, you can break the problem into such ranges and solve each range using mathematics (not iteration). Since the number of such ranges is about log2(n), this should work within the given time limit.
As an example, the various ranges become:
1. 1 (D(x) = 1)
2. 2-3 (D(x) = 2)
3. 4-7 (D(x) = 3)
4. 8-15 (D(x) = 4)
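If it helps to see the range-splitting idea as code, here is a hedged sketch of it in Haskell (my own translation, not part of the original answer; it reuses the closed form from the Python answer above, and the helper names powMod, invMod and splitByBitLength are mine):
m :: Integer
m = 10 ^ 9 + 7

-- modular exponentiation by repeated squaring
powMod :: Integer -> Integer -> Integer -> Integer
powMod _ 0 _  = 1
powMod b e md
  | even e    = let h = powMod b (e `div` 2) md in h * h `mod` md
  | otherwise = b * powMod b (e - 1) md `mod` md

-- modular inverse via Fermat's little theorem (md must be prime)
invMod :: Integer -> Integer -> Integer
invMod a md = powMod a (md - 2) md

-- triples (a, b, 2^k): the numbers in [a, b) all have bit-length k
splitByBitLength :: Integer -> [(Integer, Integer, Integer)]
splitByBitLength n = go 1
  where
    go a | a >= n    = []
         | otherwise = (a, min n (2 * a), 2 * a) : go (2 * a)

-- concatenation of the numbers in [a, b), all of bit-length k, with l = 2^k
subintervalConcat :: Integer -> Integer -> Integer -> Integer
subintervalConcat a b l = (a * q + (q - n) * inv) `mod` m
  where
    n   = b - a
    inv = invMod (l - 1) m
    q   = (powMod l n m - 1) * inv `mod` m

fastBinaryConcat :: Integer -> Integer
fastBinaryConcat n = foldl step 0 (splitByBitLength (n + 1))
  where
    step acc (a, b, l) = (acc * powMod l (b - a) m + subintervalConcat a b l) `mod` m
This should reproduce the values from the Python answer above (e.g. fastBinaryConcat 4 gives 220).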
Haskell replaces for loops over iterable objects with map :: (a -> b) -> [a] -> [b] or fmap :: (a -> b) -> f a -> f b. (This question isn't limited to Haskell; I'm just using its syntax here.)
Is there something similar that replaces a while loop, like
wmap :: ([a] -> b) -> [a] -> ([b] -> Bool) -> [b]?
This function returns a list of b.
The first argument is a function that takes a list and computes a value that will end up in the list returned by wmap (so it's a very specific kind of while loop).
The second argument is the list that we use as our starting point.
The third argument is a function that evaluates the stopping criterion.
And as a functor,
wfmap :: (f a -> b) -> f a -> (f b -> Bool) -> f b
For example, a Jacobi solver would look like this (with b now the same type as a):
jacobi :: ([a] -> [a]) -> [a] -> ([a] -> Bool) -> [a]
What I'm looking for isn't really pure. wmap could have values that mutate internally, but only exist inside the function. It also has nondeterministic runtime, if it terminates at all.
In the case of a Gauss-Seidel solver, there would be no return value, since the [a] would be modified in place.
Something like this:
gs :: ([a] -> [a]) -> [a] -> ([a] -> Bool) -> ???
Does wmap or wfmap exist as part of any language by default, and what is it called?
Answer 1 (thanks to Bergi): Instead of the silly wmap/wfmap signature, we already have until.
Does an in place version of until exist for things like gs?
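For reference, a minimal sketch (the name iterateUntil is mine) of how the signature above collapses onto Prelude's until :: (a -> Bool) -> (a -> a) -> a -> a:
iterateUntil :: ([a] -> [a]) -> ([a] -> Bool) -> [a] -> [a]
iterateUntil step done x0 = until done step x0

-- e.g. keep halving every element until all of them drop below 1:
-- iterateUntil (map (/ 2)) (all (< 1)) [8, 3, 5]  ==  [0.5, 0.1875, 0.3125]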
There is a proverb in engineering which states "Don't generalize before you have at least 3 implementations". There is some truth to it - especially when looking for new functional iteration concepts before doing it by hand a few times.
"Doing it by hand" here means: if there is no friendly helper function you know of, resort to recursion. Write your "special cases" recursively, preferably in tail-recursive form. Then, if you start to see recurring patterns, you might come up with a way to refactor them into some recurring iteration scheme and its "kernel".
Let's, for the sake of clarifying the above, assume you have never heard of foldl and you want to accumulate a result from iterating over a list. Then you would write something like:
myAvg values =
  total / fromIntegral (length values)
  where
    mySum acc [] = acc
    mySum acc (x:xs) = mySum (acc + x) xs
    total = mySum 0 values
And after doing this a couple of times, the pattern might show that the recursions in those where clauses always look darn similar. You might then come up with a name like "fold" or "reduce" for that inner recursion snippet and end up with:
myAvg values = (foldl (+) 0.0 values) / fromIntegral (length values) :: Float
So, if you are looking for helper functions which help with your use cases, my advice is to first write a few instances as recursive functions and then look for patterns.
So, with all that said, let's get our fingers wet and see how the Jacobi algorithm could translate to Haskell, just so we have something to talk about.
Usually I do not use Haskell for anything requiring arrays (containers with O(1) element access), because there are at least 5 array packages I know of and I would have to read for 2 days to decide which one is suitable for my application (TL;DR). So I stick with lists and no package dependencies beyond the Prelude in the code below. Given that the example equations we try to solve are tiny, that is not a bad thing at all. Plus, the code demonstrates that list comprehensions in lazy Haskell allow for un-imperative and yet performant operations on sets of cells (e.g. in the matrix), without any need for explicit looping.
type Matrix = [[Double]]

-- sorry - my mind went blank while looking for a better name for this...
-- but it is useful nonetheless
idefix nr nc =
  [ [(r,c) | c <- [0..nc-1]] | r <- [0..nr-1]]

matElem m (r,c) = (m !! r) !! c

transpose (r,c) = (c,r)

matrixDim m = (length m, length . head $ m)

-- constructs a Matrix by enumerating the indices and querying
-- 'unfolder' for a value.
-- try "unfoldMatrix 3 3 id" and you see how indices relate to
-- cells in the matrix.
unfoldMatrix nr nc unfolder =
  fmap (\row -> fmap (\cell -> unfolder cell) row) $ idefix nr nc

-- Not really needed for the Jacobi problem but good
-- training to get our fingers wet with unfoldMatrix.
transposeMatrix m =
  let (nr,nc) = matrixDim m in
  unfoldMatrix nc nr (matElem m . transpose)

addMatrix m1 m2
  | (matrixDim m1) == (matrixDim m2) =
    let (nr,nc) = matrixDim m1 in
    unfoldMatrix nr nc (\idx -> matElem m1 idx + matElem m2 idx)

subMatrix m1 m2
  | (matrixDim m1) == (matrixDim m2) =
    let (nr,nc) = matrixDim m1 in
    unfoldMatrix nr nc (\idx -> matElem m1 idx - matElem m2 idx)

dluMatrix :: Matrix -> (Matrix,Matrix,Matrix)
dluMatrix m
  | (fst . matrixDim $ m) == (snd . matrixDim $ m) =
    let n = fst . matrixDim $ m in
    (unfoldMatrix n n (\(r,c) -> if r == c then matElem m (r,c) else 0.0)
    ,unfoldMatrix n n (\(r,c) -> if r > c then matElem m (r,c) else 0.0)
    ,unfoldMatrix n n (\(r,c) -> if c > r then matElem m (r,c) else 0.0)
    )

mulMatrix m1 m2
  | (snd . matrixDim $ m1) == (fst . matrixDim $ m2) =
    let (nr, nc) = ((fst . matrixDim $ m1),(snd . matrixDim $ m2)) in
    let inner = snd . matrixDim $ m1 in  -- inner (shared) dimension of the product
    unfoldMatrix nr nc
      (\(ro,co) ->
        sum [ matElem m1 (ro,i) * matElem m2 (i,co) | i <- [0..inner-1]]
      )

isSquareMatrix m = let (nr,nc) = matrixDim m in nr == nc
jacobi :: Double -> Matrix -> Matrix -> Matrix -> Matrix
jacobi errMax a b x0
  | isSquareMatrix a && (snd . matrixDim $ a) == (fst . matrixDim $ b) =
    approximate x0
    -- We could possibly avoid our hand rolled recursion
    -- with the help of 'loop' from Control.Monad.Extra
    -- according to hoogle. But it would not look better at all.
    -- loop (\x -> let x' = jacobiStep x in if converged x' then Right x' else Left x') x0
  where
    (nra, nca) = matrixDim a
    (d,l,u) = dluMatrix a
    dinv = unfoldMatrix nra nca (\(r,c) ->
             if r == c
             then 1.0 / matElem d (r,c)
             else 0.0)
    lu = addMatrix l u
    converged x =
      let delta = (subMatrix (mulMatrix a x) b) in
      let (nrd,ncd) = matrixDim delta in
      let err = sum (fmap (\idx -> let v = matElem delta idx in v * v)
                          (concat (idefix nrd ncd))) in
      err < errMax
    jacobiStep x =
      (mulMatrix dinv (subMatrix b (mulMatrix lu x)))
    approximate x =
      let x' = jacobiStep x in
      if converged x' then x' else approximate x'

wikiExample errMax =
  let a = [[ 2.0, 1.0],[5.0,7.0]] in
  let b = [[11], [13]] in
  jacobi errMax a b [[1.0],[1.0]]
Function idefix, despite its silly name, is IMHO an eye opener for people coming from non-lazy languages. Their first reflex is to get scared: "What - he creates a list with the indices instead of writing loops? What a waste!" But in a lazy language it is not a waste. What you see in this function (the list comprehension) produces a lazy list. It is not really created. What happens behind the scenes is similar in spirit to what LINQ does in C# - IEnumerator<T> juggling.
We use idefix a second time when we want to sum all elements in our delta. There, we do not care about the concrete structure of the matrix. And so we use the standard prelude function concat to flatten the Matrix into a linear list. Lazy as well, of course. That is the beauty.
The next notable difference from the imperative Wikipedia pseudocode is that using matrix notation is much less complicated than nested looping and operating on single cells. Fortunately, the Wikipedia article shows both. So, instead of a while loop with 2 nested loops, we only need an equivalent of the outermost while loop, which is covered by our two-line recursive function approximate.
Lessons learned:
Lists and list comprehensions can help simplify code that would otherwise require nested loops (in lazy languages).
OCaml and Common Lisp have mutability, built-in arrays and loops. Together, that is very convenient when translating algorithms from imperative languages or imperative pseudocode.
Haskell has immutability, no built-in arrays and no loops, but instead it has a similarly powerful set of tools, namely laziness, tail call optimization and a terse syntax. That combination requires more planning (and writing some usually short helper functions) instead of the classical C approach of "Let's write it all in main()".
Sometimes it is easier to write a 2 line long recursive function than to think about how to abstract it.
In FP, you don't usually try to fit everything "inside the loop." You do one step and pass it on to the next function. There are lots of combinations that are useful in different situations. A common replacement for a while loop is a map followed by a takeWhile or a dropWhile, but there are many other possibilities, up to just plain recursion.
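For example, a tiny illustration of the iterate-plus-takeWhile pattern (the names here are mine):
-- "while x < limit: collect x; x = 2 * x" without a while loop:
powersBelow :: Integer -> [Integer]
powersBelow limit = takeWhile (< limit) (iterate (* 2) 1)

-- powersBelow 1000 == [1,2,4,8,16,32,64,128,256,512]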
I have a set of problems that I've been working through and can't seem to understand what the last one is asking. Here is the first problem, and my solution to it:
a) Often we are interested in computing ∑i=m..n f(i), the sum of function values f(i) for i = m through n. Define sigma f m n which computes ∑i=m..n f(i). This is different from defining sigma (f, m, n).
fun sigma f m n = if (m=n) then f(m) else (f(m) + sigma f (m+1) n);
The second problem, and my solution:
b) In the computation of sigma above, the index i goes from current i to next value i+1. We may want to compute the sum of f(i) where i goes from current i to the next, say i+2, not i+1. If we send this information as an argument, we can compute more generalized summation. Define ‘sum f next m n’ to compute such summation, where ‘next’ is a function to compute the next index value from the current index value. To get ‘sigma’ in (a), you send the successor function as ‘next’.
fun sum f next m n = if (m>=n) then f(m) else (f(m) + sum f (next) (next(m)) n);
And the third problem, with my attempt:
c) Generalizing sum in (b), we can compute not only summation but also product and other forms of accumulation. If we want to compute sum in (b), we send addition as an argument; if we want to compute the product of function values, we send multiplication as an argument for the same parameter. We also have to send the identity of the operator. Define ‘accum h v f next m n’ to compute such accumulation, where h is a two-variable function to do accumulation, and v is the base value for accumulation. If we send the multiplication function for h, 1 for v, and the successor function as ‘next’, this ‘accum’ computes ∏i=m..n f(i). Create examples whose ‘h’ is not addition or multiplication, too.
fun accum h v f next m n = if (m>=n) then f(m) else (h (f(m)) (accum (h) (v) (f) (next) (next(m)) n));
In problem (c), I'm unsure of what I'm supposed to do with my "v" argument. Right now the function will take any interval of numbers m to n and apply any kind of operation to them. For example, I could call my function
accum mult (4?) double next3 1 5;
where double is a doubling function and next3 adds 3 to a given value. Any ideas on how I'm supposed to utilize the v value?
This set of problems is designed to lead to the implementation of an accumulation function. It takes:
h - combines previous value and current value to produce next value
v - starting value for h
f - function to be applied to values from [m, n) interval before passing them to h function
next - computes next value in sequence
m and n - boundaries
Here is how I'd define accum:
fun accum h v f next m n = if m >= n then v else accum h (h (f m) v) f next (next m) n
Examples like those described in (c) will look like this:
fun sum x y = x + y;
fun mult x y = x * y;
fun id x = x;
fun next x = x + 1;        (* successor function, used as 'next' below *)
accum sum 0 id next 1 10;  (* sum over [1, 10) starting from 0 *)
accum mult 1 id next 1 10; (* product over [1, 10) starting from 1 *)
For example, you can calculate the sum of the numbers from 1 to 10 plus 5 if you pass 5 as v in the first example.
The instructions will make more sense if you consider the possibility of an empty interval.
The "sum" of a single value n is n. The sum of no values is zero.
The "product" of a single value n is n. The product of no values is one.
A list of a single value n is [n] (n::nil). A list of no values is nil.
Currently, you're assuming that m ≤ n, and treating m = n as a special case that returns f m. Another approach is to treat m > n as the special case, returning v. Then, when m = n, your function will automatically return h v (f m), which is the same as (f m) (provided that v was selected properly for this h).
To be honest, though, I think the v-less approach is fine when the function's arguments specify an interval of the form [m,n], since there's no logical reason that such a function would support an empty interval. (I mean, [m,m−1] isn't so much "the empty interval" as it is "obvious error".) The v-ful approach is chiefly useful when the function's arguments specify a list or set of elements in some way that really could conceivably be empty, e.g. as an 'a list.
I am trying to make a function to round a floating point number to a given number of digits. What I have come up with so far is this:
import Numeric

digs :: Integral x => x -> [x]
digs 0 = []
digs x = digs (x `div` 10) ++ [x `mod` 10]

roundTo x t = let d = length $ digs $ round x
                  roundToMachine x t = (fromInteger $ round $ x * 10^^t) * 10^^(-t)
              in roundToMachine x (t - d)
I am using the digs function to determine the number of digits before the decimal point in order to normalize the input value (i.e. move everything past the decimal point, so 1.234 becomes 0.1234 * 10^1).
The roundTo function seems to work for most input; however, for some inputs I get strange results, e.g. roundTo 1.0014 4 produces 1.0010000000000001 instead of 1.001.
The problem in this example is caused by calculating 1001 * 1.0e-3 (which returns 1.0010000000000001)
Is this simply a problem in the number representation of Haskell I have to live with or is there a better way to round a floating point number to a specific length of digits?
I realise this question was posted almost 2 years back, but I thought I'd have a go at an answer that didn't require a string conversion.
-- x : number you want rounded, n : number of decimal places you want...
truncate' :: Double -> Int -> Double
truncate' x n = (fromIntegral (floor (x * t))) / t
  where t = 10^n
-- How to answer your problem...
λ truncate' 1.0014 3
1.001
-- 2 digits of a recurring decimal please...
λ truncate' (1/3) 2
0.33
-- How about 6 digits of pi?
λ truncate' pi 6
3.141592
I've not tested it thoroughly, so if you find numbers this doesn't work for let me know!
This isn't a Haskell problem as much as a floating point problem. Since each floating point number is implemented in a finite number of bits, there exist numbers that can't be represented completely accurately. You can also see this by calculating 0.1 + 0.2, which awkwardly returns 0.30000000000000004 instead of 0.3. This has to do with how floating point numbers are implemented for your language and hardware architecture.
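A quick check in GHCi illustrates the point (my own illustration; the fraction shown is the IEEE double closest to 0.1):
> 0.1 + 0.2 :: Double
0.30000000000000004
> toRational (0.1 :: Double)
3602879701896397 % 36028797018963968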
The solution is to continue using your roundTo function for doing computation (it's as accurate as you'll get without special libraries), but if you want to print it to the screen then you should use string formatting such as the Text.Printf.printf function. You can specify the number of digits to round to when converting to a string with something like
import Text.Printf
roundToStr :: (PrintfArg a, Floating a) => Int -> a -> String
roundToStr n f = printf ("%0." ++ show n ++ "f") f
But as I mentioned, this will return a string rather than a number.
EDIT:
A better way might be
roundToStr :: (PrintfArg a, Floating a) => Int -> a -> String
roundToStr n f = printf (printf "%%0.%df" n) f
but I haven't benchmarked to see which is actually faster. Both will work exactly the same though.
EDIT 2:
As @augustss has pointed out, you can do it even more easily with just
roundToStr :: (PrintfArg a, Floating a) => Int -> a -> String
roundToStr = printf "%0.*f"
which uses a formatting rule that I was previously unaware of.
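For the question's example, this would behave like (a quick hypothetical check):
> roundToStr 3 (1.0014 :: Double)
"1.001"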
I also think that avoiding string conversion is the way to go; however, I would modify the previous post (from schanq) to use round instead of floor:
round' :: Double -> Integer -> Double
round' num sg = (fromIntegral . round $ num * f) / f
  where f = 10^sg
> round' 3.99999 4
4.0
> round' 4.00001 4
4.0
I have this complex iterations program I wrote in TI Basic to perform a basic iteration on a complex number and then give the magnitude of the result:
INPUT “SEED?”, C
INPUT “ITERATIONS?”, N
C→Z
For (I,1,N)
Z^2 + C → Z
DISP Z
DISP “MAGNITUDE”, sqrt ((real(Z)^2 + imag(Z)^2))
PAUSE
END
What I would like to do is make a Haskell version of this to wow my teacher in an assignment. I am still only learning and got this far:
fractal :: (RealFloat a) =>
  (Complex a) -> (Integer a) -> [Complex a]
fractal c n | n == a    = z : fractal (z^2 + c)
            | otherwise = error "Finished"
What I don't know how to do is how to make it only iterate n times, so I wanted to have it count up a and then compare it to n to see if it had finished.
How would I go about this?
Newacct's answer shows the way:
fractal c n = take n $ iterate (\z -> z^2 + c) c
iterate generates the infinite list of repeated applications.
Ex:
iterate (2*) 1 == [1, 2, 4, 8, 16, 32, ...]
Regarding the IO, you'll have to do some monadic computations.
import Data.Complex
import Control.Monad

fractal c n = take n $ iterate (\z -> z^2 + c) c

main :: IO ()
main = do
  -- Print and read (you could even omit the type signatures here)
  putStr "Seed: "
  c <- readLn :: IO (Complex Double)
  putStr "Number of iterations: "
  n <- readLn :: IO Int
  -- Work with each element of the result list
  forM_ (fractal c n) $ \current -> do
    putStrLn $ show current
    putStrLn $ "Magnitude: " ++ (show $ magnitude current)
Since Complex is convertible from and to strings by default, you can use readLn to read them from the console (format is Re :+ Im).
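For example, entering 1.0 :+ 2.0 at the prompt is parsed as the complex number 1 + 2i:
λ read "1.0 :+ 2.0" :: Complex Double
1.0 :+ 2.0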
Edit: Just for fun, one could desugar the monadic syntax and type signatures, which would compress the whole program to this:
main =
  (putStr "Seed: ") >> readLn >>= \c ->
  (putStr "Number of iterations: ") >> readLn >>= \n ->
  forM_ (take n $ iterate (\z -> z^2 + c) c) $ \current ->
    putStrLn $ show current ++ "\nMagnitude: " ++ (show $ magnitude current)
Edit #2: Some links related to plotting and Mandelbrot sets:
Fractal plotter
Plotting with Graphics.UI
Simplest solution (ASCII art)
Well you can always generate an infinite list of results of repeated applications and take the first n of them using take. And the iterate function is useful for generating an infinite list of results of repeated applications.
If you'd like a list of values:
fractalList c n = fractalListHelper c c n
  where
    fractalListHelper z c 0 = []
    fractalListHelper z c n = z : fractalListHelper (z^2 + c) c (n-1)
If you only care about the last result:
fractal c n = fractalHelper c c n
  where
    fractalHelper z c 0 = z
    fractalHelper z c n = fractalHelper (z^2 + c) c (n-1)
Basically, in both cases you need a helper function to do the counting and accumulation. Now I'm sure there's a better/less verbose way to do this, but I'm pretty much a Haskell newbie myself.
Edit: just for kicks, a foldr one-liner:
fractalFold c n = foldr (\c z -> z^2 + c) c (take n (repeat c))
(although, the (take n (repeat c)) thing seems kind of unnecessary, there has to be an even better way)