I found the following code that is meant to compute a^b (Cracking the Coding Interview, Ch. VI Big O).
What's the logic of return a * power(a, b - 1); ? Is it recursion
of some sort?
Is power a keyword here or just pseudocode?
int power(int a, int b)
{
    if (b < 0) {
        return a; // error
    } else if (b == 0) {
        return 1;
    } else {
        return a * power(a, b - 1);
    }
}
Power is just the name of the function.
Yes, this is recursion, since we are expressing a given problem in terms of a smaller problem of the same type.
Let a=2 and b=4 and calculate power(2,4) -- the large problem (the original one).
Now we express this in terms of a smaller one,
i.e. 2*power(2,4-1) -- the smaller problem of the same type, power(2,3),
i.e. a*power(a,b-1).
The ifs at the start handle the base cases, i.e. when b reaches 0 (or goes negative, which is treated as an error).
This is a recursive function. That is, the function is defined in terms of itself, with a base case that prevents the recursion from running indefinitely.
power is the name of the function.
For example, 4^3 is equal to 4 * 4^2. That is, 4 raised to the third power can be calculated by multiplying 4 and 4 raised to the second power. Likewise, 4^2 can be calculated as 4 * 4^1, and 4^1 as 4 * 4^0, where the base case of the recursion specifies that 4^0 = 1. Combining this together, 4^3 = 4 * 4^2 = 4 * 4 * 4^1 = 4 * 4 * 4 * 4^0 = 4 * 4 * 4 * 1 = 64.
power here is just the name of the function that is defined and NOT a keyword.
Now, let's consider that you want to find 2^10. You can write the same thing as 2*(2^9), as 2*2*(2^8), as 2*2*2*(2^7), and so on, down to 2*2*2*2*2*2*2*2*2*(2^1).
This is what a * power(a, b - 1) is doing in a recursive manner.
Here is a dry run of the code for finding 2^4:
The initial call to the function will be power(2,4); the complete stack trace is shown below:
power(2,4) ---> returns 2*power(2,3), i.e., 2*8 = 16
  |
power(2,3) ---> returns 2*power(2,2), i.e., 2*4 = 8
  |
power(2,2) ---> returns 2*power(2,1), i.e., 2*2 = 4
  |
power(2,1) ---> returns 2*power(2,0), i.e., 2*1 = 2
  |
power(2,0) ---> returns 1, since b == 0
Forgive me if this is basic but I'm new to Elixir and trying to understand the language.
Say I have this code
defmodule Test do
  def mult([]), do: 1

  def mult([head | tail]) do
    IO.puts "head is: #{head}"
    IO.inspect tail
    head * mult(tail)
  end
end
When I run it with Test.mult([1,5,10])
I get the following output
head is: 1
[5, 10]
head is: 5
'\n'
head is: 10
[]
50
But I'm struggling to understand what's going on because if I separately try to do this:
[h | t] = [1,5,10]
h * t
obviously I get an error. Can someone explain what I'm missing?
Your function breaks down as:
mult([1 | [5, 10]])
1 * mult([5 | [10]])
1 * 5 * mult([10 | []])
1 * 5 * 10 * mult([])
1 * 5 * 10 * 1
50
The '\n' is actually [10] and is due to this: Elixir lists interpreted as char lists.
IO.inspect('\n', charlists: :as_lists)
[10]
Consider the arguments being passed to mult on each invocation.
When you do Test.mult([1,5,10]), it first checks the first function clause; that is, can [1,5,10] be made to match []? Elixir will try to make the two expressions match if it can. In this case it cannot, so it tries the next function clause: can [1,5,10] be made to match [head|tail]? Yes it can, so it assigns the first element (1) to head and the remaining elements [5,10] to tail. Then it recursively calls the function again, but this time with the list [5,10].
Again it tries to match [5,10] to []; again this cannot be made to match, so it drops down to [head|tail]. This time head is 5 and tail is [10]. So again the function is called recursively with [10].
Again, can [10] be made to match []? No. So again it hits [head|tail] and assigns head = 10 and tail = [] (remember there's always an implied empty list at the end of every list).
Last go round: now [] definitely matches [], so it returns 1. Then the prior head * mult(tail) is evaluated (10 * 1) and that result is returned to the prior call on the stack. Evaluated again, head (5) * mult(tail) (10) = 50. Final unwind of the stack: head (1) * mult(tail) (50) = 50. Hence the overall value of the function is 50.
Remember that Elixir cannot totally evaluate any function call until it evaluates all subsequent function calls. So it hangs on to the intermediate values in order to compute the final value of the function.
Now consider your second code fragment in terms of pattern matching. [h|t] = [1,5,10] will assign h = 1 and t = [5,10]. h*t means 1 * [5,10]. Since those are fundamentally different types there's no inbuilt definition for multiplication in this case.
I am trying to find the smallest index containing the value i in a sorted array. If this value i is not present, I want -1 to be returned. I am using a binary search recursive subroutine. The problem is that I can't really stop this recursion, and I get a lot of answers (one right and the rest wrong). And sometimes I get an error called "segmentation fault: 11" and I don't really get any results.
I've tried deleting the random_number call since I already have a sorted array in my main program, but it did not work.
program main
   implicit none
   integer, allocatable :: A(:)
   real :: MAX_VALUE
   integer :: i, j, n, s, low, high
   real :: x

   N = 10          ! size of table
   MAX_VALUE = 10
   allocate(A(n))
   s = 5           ! searched value
   low = 1         ! lower limit
   high = n        ! highest limit

   ! generate random table of numbers (from 0 to 1000)
   call Random_Seed
   do i = 1, N
      call Random_Number(x)      ! returns random x >= 0 and < 1
      A(i) = anint(MAX_VALUE*x)
   end do

   call bubble(n,a)

   print *, ' '
   write(*,10) (a(i), i = 1, N)
10 format(10i6)

   call bsearch(A,n,s,low,high)

   deallocate(A)
end program main
The sort subroutine:
subroutine sort(p,q)
   implicit none
   integer(kind=4), intent(inout) :: p, q
   integer(kind=4) :: temp

   if (p > q) then
      temp = p
      p = q
      q = temp
   end if
   return
end subroutine sort
The bubble subroutine:
subroutine bubble(n,arr)
   implicit none
   integer(kind=4), intent(inout) :: n
   integer(kind=4), intent(inout) :: arr(n)
   integer(kind=4) :: sorted(n)
   integer :: i, j

   do i = 1, n
      do j = n, i+1, -1
         call sort(arr(j-1), arr(j))
      end do
   end do
   return
end subroutine bubble
recursive subroutine bsearch(b,n,i,low,high)
   implicit none
   integer(kind=4) :: b(n)
   integer(kind=4) :: low, high
   integer(kind=4) :: i, j, x, idx, n
   real(kind=4) :: r

   idx = -1
   call random_Number(r)
   x = low + anint((high - low)*r)

   if (b(x).lt.i) then
      low = x + 1
      call bsearch(b,n,i,low,high)
   else if (b(x).gt.i) then
      high = x - 1
      call bsearch(b,n,i,low,high)
   else
      do j = low, high
         if (b(j).eq.i) then
            idx = j
            exit
         end if
      end do
   end if

   ! Stop if high = low
   if (low.eq.high) then
      return
   end if

   print*, i, 'found at index ', idx
   return
end subroutine bsearch
The goal is to get the same results as my linear search, but I'm getting either of these outputs:
Sorted table:
1 1 2 4 5 5 6 7 8 10
5 found at index 5
5 found at index -1
5 found at index -1
or if the value is not found
2 2 3 4 4 6 6 7 8 8
Segmentation fault: 11
There are two issues causing your recursive search routine bsearch to either stop with unwanted output or result in a segmentation fault. Simply following the execution logic of your program through the examples you provided elucidates the matter:
1) value present and found, unwanted output
First, consider the first example, where array b contains the value i=5 you are searching for (value and index pointed out with || in the first two lines of the code block below). Using the notation Rn to indicate the n'th level of recursion, L and H for the lower and upper bounds, and x for the current index estimate, a given run of your code could look something like this:
b(x): 1 1 2 4 |5| 5 6 7 8 10
x: 1 2 3 4 |5| 6 7 8 9 10
R0: L x H
R1: Lx H
R2: L x H
5 found at index 5
5 found at index -1
5 found at index -1
In R0 and R1, the tests b(x).lt.i and b(x).gt.i in bsearch work as intended to reduce the search interval. In R2 the do-loop in the else branch is executed, idx is assigned the correct value and this is printed - as intended. However, a return statement is now encountered which will return control to the calling program unit - in this case that is first R1(!) where execution will resume after the if-else if-else block, thus printing a message to screen with the initial value of idx=-1. The same happens upon returning from R0 to the main program. This explains the (unwanted) output you see.
2) value not present, segmentation fault
Secondly, consider the example resulting in a segmentation fault. Using the same notation as before, a possible run could look like this:
b(x): 2 2 3 4 4 6 6 7 8 8
x: 1 2 3 4 5 6 7 8 9 10
R0: L x H
R1: L x H
R2: L x H
R3: LxH
R4: H xL
.
.
.
Segmentation fault: 11
In R0 to R2 the search interval is again reduced as intended. However, in R3 the logic fails. Since the search value i is not present in array b, one of the .lt. or .gt. tests will always evaluate to .true., meaning that the test for low .eq. high to terminate a search is never reached. From this point onwards, the logic is no longer valid (e.g. high can be smaller than low) and the code will continue deepening the level of recursion until the call stack gets too big and a segmentation fault occurs.
These explain the main logical flaws in the code. A possible inefficiency is the use of a do-loop to find the lowest index containing a searched-for value. Consider a case where the value you are looking for is, e.g., i=8, and it appears near the end of your array, as below. Assume further that, by chance, the first guess for its position is x = high. This implies that your code will immediately branch to the do-loop, where in effect a linear search is done of very nearly the entire array to find the final result idx=9. Although correct, the intended binary search rather becomes a linear search, which could result in reduced performance.
b(x): 2 2 3 4 4 6 6 7 |8| 8
x: 1 2 3 4 5 6 7 8 |9| 10
R0: L xH
8 found at index 9
Fixing the problems
At the very least, you should move the low .eq. high test to the start of the bsearch routine, so that recursion stops before invalid bounds can be defined (you then need an additional test to see if the search value was found or not). Also, notify about a successful search right after it occurs, i.e. after the equality test in your do-loop, or the additional test just mentioned. This still does not address the inefficiency of a possible linear search.
All taken into account, you are probably better off reading up on algorithms for finding a "leftmost" index (e.g. on Wikipedia, or look at a tried and tested implementation - both examples here use iteration instead of recursion, perhaps another improvement, but the same principles apply) and adapting that to Fortran, which could look something like this (only showing new code; ... refers to existing code in your examples):
module mod_search
   implicit none

contains

   ! Function that uses recursive binary search to look for `key` in an
   ! ordered `array`. Returns the array index of the leftmost occurrence
   ! of `key` if present in `array`, and -1 otherwise
   function search_ordered (array, key) result (idx)
      integer, intent(in) :: array(:)
      integer, intent(in) :: key
      integer :: idx

      ! find left most array index that could possibly hold `key`
      idx = binary_search_left(1, size(array))

      ! if `key` is not found, return -1
      if (array(idx) /= key) then
         idx = -1
      end if

   contains

      ! function for recursive reduction of search interval
      recursive function binary_search_left(low, high) result(idx)
         integer, intent(in) :: low, high
         integer :: idx
         real :: r

         if (high <= low) then
            ! found lowest possible index where target could be
            idx = low
         else
            ! new guess
            call random_number(r)
            idx = low + floor((high - low)*r)
            ! alternative: idx = low + (high - low) / 2

            if (array(idx) < key) then
               ! continue looking to the right of current guess
               idx = binary_search_left(idx + 1, high)
            else
               ! continue looking to the left of current guess (inclusive)
               idx = binary_search_left(low, idx)
            end if
         end if
      end function binary_search_left
   end function search_ordered

   ! Move your routines into a module
   subroutine sort(p,q)
      ...
   end subroutine sort

   subroutine bubble(n,arr)
      ...
   end subroutine bubble

end module mod_search
! your main program
program main
   use mod_search, only : search_ordered, sort, bubble  ! <---- use routines from module like so
   implicit none
   ...

   ! Replace your call to bsearch() with the following:
   ! call bsearch(A,n,s,low,high)
   i = search_ordered(A, s)
   if (i /= -1) then
      print *, s, 'found at index ', i
   else
      print *, s, 'not found!'
   end if
   ...
end program main
Finally, depending on your actual use case, you could also just consider using the Fortran intrinsic procedure minloc, saving you the trouble of implementing all this functionality yourself. In this case, it can be done by making the following modification in your main program:
! i = search_ordered(a, s) ! <---- comment out this line
j = minloc(abs(a-s), dim=1) ! <---- replace with these two
i = merge(j, -1, a(j) == s)
where j returned from minloc will be the lowest index in the array a where s may be found, and merge is used to return j when a(j) == s and -1 otherwise.
Some cases of periodic boundary conditions (PBC) can be imposed very efficiently on integers by simply doing:
myWrappedWithinPeriodicBoundary = myUIntValue & mask
This works when the boundary is the half open range [0, upperBound), where the (exclusive) upperBound is 2^exp so that
mask = (1 << exp) - 1
For example:
let pbcUpperBoundExp = 2 // so the periodic boundary will be [0, 4)
let mask = (1 << pbcUpperBoundExp) - 1
for x in -7 ... 7 { print(x & mask, terminator: " ") }
(in Swift) will print:
1 2 3 0 1 2 3 0 1 2 3 0 1 2 3
Question: Is there any (roughly similar) efficient method for imposing (some cases of) PBCs on floating point-numbers (32 or 64-bit IEEE-754)?
There are several reasonable approaches:
fmod(x,1)
modf(x,&dummy) — has the advantage of knowing its divisor statically, but in my testing comes from libc.so.6 even with -ffast-math
x-floor(x) (suggested by Jens in a comment) — supports negative inputs directly
Manual bit-twiddling direct implementation
Manual bit-twiddling implementation of floor
The first two preserve the sign of their input; you can add 1 if it's negative.
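As a minimal sketch of that sign fix, in Swift since the question is Swift-flavored (my own code and naming, using truncatingRemainder as the Swift counterpart of fmod(x, 1)):

// Wrap x to [0, 1): the remainder keeps the sign of x, so shift negative
// results up by one period.
func wrappedToUnitInterval(_ x: Double) -> Double {
    let r = x.truncatingRemainder(dividingBy: 1.0)   // behaves like fmod(x, 1)
    return r < 0 ? r + 1.0 : r
}
// Caveat: for tiny negative x the sum r + 1.0 can round to exactly 1.0,
// the same edge case discussed for x - floor(x) further down this thread.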
The two bit manipulations are very similar: you identify which significand bits correspond to the integer portion, and mask them (for the direct implementation) or the rest (to implement floor) off. The direct implementation can be completed either with a floating-point division or with a shift to reassemble the double manually; the former is 28% faster even given hardware CLZ. The floor implementation can immediately reconstitute a double: floor never changes the exponent of its argument unless it returns 0. About 20 lines of C are required.
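To make the bit manipulation concrete, here is a rough Swift sketch of the floor variant described above (my own code, not the measured C implementation; it assumes IEEE-754 binary64 and is only meant to show the idea):

// floor(x) by masking away the significand bits that represent the
// fractional part; the wrapped value is then x - floorViaBits(x).
func floorViaBits(_ x: Double) -> Double {
    let bits = x.bitPattern
    let exponent = Int((bits >> 52) & 0x7FF) - 1023          // unbiased exponent
    if exponent < 0 {                                        // |x| < 1
        return x < 0 ? -1.0 : 0.0
    }
    if exponent >= 52 {                                      // no fractional bits left
        return x                                             // (also covers inf/NaN)
    }
    let fractionMask = (UInt64(1) << UInt64(52 - exponent)) - 1
    if bits & fractionMask == 0 { return x }                 // already an integer
    let truncated = Double(bitPattern: bits & ~fractionMask) // rounds toward zero
    return x < 0 ? truncated - 1.0 : truncated               // adjust to round toward -inf
}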
The following timing is with double and gcc -O3, with timing loops over representative inputs into which the operative code was inlined.
fmod: 41.8 ns
modf: 19.6 ns
floor: 10.6 ns
With -ffast-math:
fmod: 26.2 ns
modf: 30.0 ns
floor: 21.9 ns
Bit manipulation:
direct: 18.0 ns
floor: 20.6 ns
The manual implementations are competitive, but the floor technique is the best. Oddly, two of the three library functions perform better without -ffast-math: that is, as a PLT function call than as an inlined builtin function.
I'm adding this answer to my own question since it describes the best solution I have found at the time of writing. It's in Swift 4.1 (it should be straightforward to translate into C) and it's been tested in various use cases:
extension BinaryFloatingPoint {
    /// Returns the value after restricting it to the periodic boundary
    /// condition [0, 1).
    /// See https://forums.swift.org/t/why-no-fraction-in-floatingpoint/10337
    @_transparent
    func wrappedToUnitRange() -> Self {
        let fract = self - self.rounded(.down)
        // Have to clamp to just below 1 because very small negative values
        // will otherwise return an out of range result of 1.0.
        // Turns out this:
        if fract >= 1.0 { return Self(1).nextDown } else { return fract }
        // is faster than this:
        //return min(fract, Self(1).nextDown)
    }

    @_transparent
    func wrapped(to range: Range<Self>) -> Self {
        let measure = range.upperBound - range.lowerBound
        let recipMeasure = Self(1) / measure
        let scaled = (self - range.lowerBound) * recipMeasure
        return scaled.wrappedToUnitRange() * measure + range.lowerBound
    }

    @_transparent
    func wrappedIteratively(to range: Range<Self>) -> Self {
        var v = self
        let measure = range.upperBound - range.lowerBound
        while v >= range.upperBound { v = v - measure }
        while v < range.lowerBound { v = v + measure }
        return v
    }
}
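For illustration, here is a hypothetical usage of the extension above (my own example, not part of the benchmark; printed values are approximate due to floating-point rounding):

let v: Double = 3.7
print(v.wrapped(to: -1.0 ..< 1.0))              // ≈ -0.3 (3.7 pulled down by two periods of length 2)
print((-0.25).wrappedToUnitRange())             // 0.75
print(7.2.wrappedIteratively(to: 0.0 ..< 2.0))  // ≈ 1.2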
On my MacBook Pro with a 2 GHz Intel Core i7,
a hundred million (probably inlined) calls to wrapped(to range:) on random (finite) Double values takes 0.6 seconds, which is about 166 million calls per second (not multi threaded). The range being statically known or not, or having bounds or measure that is a power of two etc, can make some difference but not as much as one could perhaps have thought.
wrappedToUnitRange() takes about 0.2 seconds, meaning 500 million calls per second on my system.
Given the right scenario, wrappedIteratively(to range:) is as fast as wrappedToUnitRange().
The timings have been made by comparing a baseline test (without wrapping some value, but still using it to compute e.g. a simple xor checksum) to the same test where a value is wrapped. The difference in time between these is the time I have given for the wrapping calls.
I have used Swift development toolchain 2018-02-21, compiling with -O -whole-module-optimization -static-stdlib -gnone. And care has been taken to make the tests relevant, ie preventing dead code removal, using true random input of different distributions etc. Writing the wrapping functions generically, like this extension on BinaryFloatingPoint, turned out to be optimized into equivalent code as if I had written separate specialized versions for eg Float and Double.
It would be interesting to see someone more skilled than me investigating this further (C or Swift or any other language doesn't matter).
EDIT:
For anyone interested, here are some versions for simd float2:
extension float2 {
    @_transparent
    func wrappedInUnitRange() -> float2 {
        return simd.fract(self)
    }

    @_transparent
    func wrappedToMinusOneToOne() -> float2 {
        let scaled = (self + float2(1, 1)) * float2(0.5, 0.5)
        let scaledFract = scaled - floor(scaled)
        let wrapped = simd_muladd(scaledFract, float2(2, 2), float2(-1, -1))
        // Note that we have to make sure the result is not out of bounds, like
        // simd fract does:
        let oneNextDown = Float(bitPattern:
            0b0_01111110_11111111111111111111111)
        let oneNextDownFloat2 = float2(oneNextDown, oneNextDown)
        return simd.min(wrapped, oneNextDownFloat2)
    }

    @_transparent
    func wrapped(toLowerBound lowerBound: float2,
                 upperBound: float2) -> float2
    {
        let measure = upperBound - lowerBound
        let recipMeasure = simd_precise_recip(measure)
        let scaled = (self - lowerBound) * recipMeasure
        let scaledFract = scaled - floor(scaled)
        // Note that we have to make sure the result is not out of bounds, like
        // simd fract does:
        let wrapped = simd_muladd(scaledFract, measure, lowerBound)
        let maxX = upperBound.x.nextDown // For some reason, this won't be
        let maxY = upperBound.y.nextDown // optimized even when upperBound is
        // statically known, and there is no similar simd function available.
        let maxValue = float2(maxX, maxY)
        return simd.min(wrapped, maxValue)
    }
}
I asked some related simd questions here which might be of interest.
EDIT2:
As can be seen in the above Swift Forums thread:
// Note that tiny negative values like:
let x: Float = -1e-08
// May produce results outside the [0, 1) range:
let wrapped = x - floor(x)
print(wrapped < 1.0) // false
// which may result in out-of-bounds table accesses
// in common usage, so it's probably better to use:
let correctlyWrapped = simd_fract(x)
print(correctlyWrapped < 1.0) // true
I have since updated the code to account for this.
I am a newbie to programming. Here I have been solving a simple problem in functional programming (Oz): finding the sum of the digits of a 6-digit positive integer.
Example: if n = 123456 then
output = 1+2+3+4+5+6, which is 21.
Here I found a solution like the one below:
fun {SumDigits6 N}
   {SumDigits (N div 1000)} + {SumDigits (N mod 1000)}
end
and it says the argument (N div 1000) gives the first 3 digits and the argument (N mod 1000) gives the last 3 digits, and yes, I am getting the correct solution. But my doubt is how it could give the correct solution. I mean, in the given example, isn't (N div 1000) of 123456 just 123, not 1+2+3? And similarly, doesn't (N mod 1000) of 123456 give us 456, not 4+5+6? In that case, the answer should be 123+456, which equals 579, not 21, right? What am I missing here? I apologize for asking such a simple question, but any help would be appreciated.
Thank you :)
You are missing the most important thing here.
It is supposed to happen in a loop, and each time the value of N changes.
For example:
In the first iteration,
the div gives 1 and the mod gives 6, so you add 1 and 6, store the result, and the number is also modified (it becomes 2345).
In the second iteration,
the div gives 2 and the mod gives 5, so you add 2 + 5 + the previous result, and the number is modified again.
This goes on till the number becomes zero.
Your function is recursive, so each time the number gets smaller, until it is just 0; then it goes back, summing up all the partial results. You can do it with an accumulator to store the result, in this simple way:
declare
fun {SumDigit N Accumulator}
   if N==0 then Accumulator
   else {SumDigit (N div 10) Accumulator+(N mod 10)}
   end
end
{Browse {SumDigit 123456 0}}
I think the most elegant way is this function:
static int SumOfDigit(int n)
{
    if (n < 10) return n;
    return SumOfDigit(n/10) + n%10;
}
simple and true :-)
#include <stdio.h>

int main()
{
    int n, m, d, s = 0;
    scanf("%d", &n);
    m = n;
    while (m != 0)
    {
        d = m % 10;
        s = s + d;
        m = m / 10;
    }
    printf("Sum of digits of %d is %d\n", n, s);
    return 0;
}
Edit: This was originally on programmers.stackexchange.com, but since I already had some code I thought it might be better here.
I am trying to generate a random math problem/equation. After researching, the best (and only) thing I found was this question: Generating random math expression. Because I needed all 4 operations, and the app that I will be using this in is targeted at kids, I wanted to be able to ensure that all numbers would be positive, division would come out even, etc., so I decided to use a tree.
I have the tree working, and it can generate an equation as well as evaluate it to a number. The problem is that I am having trouble getting it to only use parentheses when needed. I have tried several solutions, which primarily involve:
Seeing if this node is on the right of the parent node
Seeing if the node is more/less important than its parent node (* > +, etc.)
Seeing if the node & its parent are of the same type, and if so, whether the order matters for that operation.
Not that it matters, I am using Swift, and here is what I have so far:
func generateString(parent_node: TreeNode) -> String {
    if (self.is_num) {
        return self.equation!
    }
    var equation = self.equation!
    var left_equation: String = self.left_node!.generateString(self)
    var right_equation: String = self.right_node!.generateString(self)

    // Conditions for ()s
    var needs_parentheses = false
    needs_parentheses = parent_node.importance > self.importance
    needs_parentheses = (
        needs_parentheses
        ||
        (
            parent_node.right_node?.isEqualToNode(self)
            &&
            parent_node.importance <= self.importance
            &&
            (
                parent_node.type != self.type
                &&
                ( parent_node.order_matters != true || self.order_matters != true )
            )
        )
    )
    needs_parentheses = (
        needs_parentheses
        &&
        (
            !(
                self.importance > parent_node.importance
            )
        )
    )
    if (needs_parentheses) {
        equation = "(\(equation))"
    }
    return equation.stringByReplacingOccurrencesOfString("a", withString: left_equation).stringByReplacingOccurrencesOfString("b", withString: right_equation)
}
I have not been able to get this to work, and have been banging my head against the wall about it.
The only thing I could find on the subject of removing parentheses is this: How to get rid of unnecessary parentheses in mathematical expression, and I could not figure out how to apply it to my use case. Also, I found this, and I might try to build a parser (using PEGKit), but I wanted to know if anybody had a good idea to either determine where parentheses need to go, or put them everywhere and remove the useless ones.
EDIT: Just to be clear, I don't need someone to write this for me, I am just looking for what it needs to do.
EDIT 2: Since this app will be targeted at kids, I would like to use the least amount of parentheses possible, while still having the equation come out right. Also, the above algorithm does not put enough parentheses.
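In case it helps to see the rule in isolation: here is a self-contained Swift sketch of the convention I would aim for, using my own simplified Op/Expr types rather than your TreeNode class (so treat it as an illustration, not a drop-in). A child is parenthesized only if its operator binds less tightly than its parent's, or binds equally tightly while sitting on the right of a non-associative parent (- or /).

enum Op {
    case add, sub, mul, div

    var precedence: Int { return (self == .add || self == .sub) ? 1 : 2 }
    // + and * can absorb a same-precedence right operand without parentheses;
    // - and / cannot, since a - (b + c) != a - b + c.
    var isAssociative: Bool { return self == .add || self == .mul }
    var symbol: String {
        switch self {
        case .add: return "+"
        case .sub: return "-"
        case .mul: return "*"
        case .div: return "/"
        }
    }
}

indirect enum Expr {
    case number(Int)
    case operation(Op, Expr, Expr)

    func rendered() -> String {
        switch self {
        case .number(let n):
            return "\(n)"
        case .operation(let op, let lhs, let rhs):
            let left = lhs.rendered(under: op, isRightOperand: false)
            let right = rhs.rendered(under: op, isRightOperand: true)
            return "\(left) \(op.symbol) \(right)"
        }
    }

    private func rendered(under parent: Op, isRightOperand: Bool) -> String {
        guard case .operation(let op, _, _) = self else { return rendered() }
        let needsParentheses =
            op.precedence < parent.precedence ||
            (op.precedence == parent.precedence &&
             isRightOperand &&
             !parent.isAssociative)
        return needsParentheses ? "(\(rendered()))" : rendered()
    }
}

// The tree ((1 + 2) * 3) - (4 / 2) prints with only the parentheses it needs:
let example = Expr.operation(.sub,
    .operation(.mul, .operation(.add, .number(1), .number(2)), .number(3)),
    .operation(.div, .number(4), .number(2)))
print(example.rendered())   // (1 + 2) * 3 - 4 / 2

For real-number arithmetic this reproduces the tree's value with a minimal number of parentheses; if you want to be extra cautious about integer division you could also parenthesize a / child on the right of a *, but for trees built so that every division comes out even that case is already safe.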
I have coded a Python 3 program that does NOT use a tree, but it does successfully generate valid equations with integer solutions, and it uses all four operations plus parenthetical expressions. My target audience is 5th graders practicing order of operations (PEMDAS).
872 = 26 * 20 + (3 + 8) * 16 * 2
251 = 256 - 3 - 6 + 8 / 2
367 = 38 * 2 + (20 + 15) + 16 ** 2
260 = 28 * 10 - 18 - 4 / 2
5000 = 40 * 20 / 4 * 5 ** 2
211 = 192 - 10 / 2 / 1 + 24
1519 = 92 * 16 + 6 + 25 + 16
If the Python is of any interest to you ... I am working on translating the Python into JavaScript to make a web page available for students and teachers.