Python .1 - .1 = extremely small number w/negative exponent? - math

This has got to be a well-traveled gotcha of some sort. Define the following function foo():
>>> def foo():
...     x = 1
...     while x != 0:
...         x -= .1
...         if x < 0:
...             x = 0
...         print x
So of course, when we call the function, we get exactly what we expect to get.
>>> foo()
0.9
0.8
0.7
0.6
0.5
0.4
0.3
0.2
0.1
1.38777878078e-16 # O_o
0
So, I know that math with integers vs. floating point numbers can get a little weird. Just typing 3 - 2.9 yields such an answer:
>>> 3 - 2.9
0.10000000000000009
So, in fairness -- this is not causing an issue in the script I'm mucking about with. But surely this creeps up and bites people who would actually be affected by values as astronomically small as 1.38777878078e-16. And in order to prevent there from ever being an issue because of the strangely small number, I've got this gem sitting at the bottom of my controller du jour:
if (x < .1 and x > 0) or x < 0:
    x = 0
That can't be the solution... unless it totally is. So... is it? If not, what's the trick here?

This can certainly "creep up and bite people", generally when they try to compare floats:
>>> a = 0.1
>>> b = 0.6 - 0.5
>>> a == b
False
Therefore it is common to compare floats using a tolerance:
>>> tolerance = 0.000001
>>> abs(a - b) < tolerance
True
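Since Python 3.5, the standard library also provides `math.isclose` for exactly this kind of tolerance comparison; a minimal sketch:

```python
import math

a = 0.1
b = 0.6 - 0.5   # slightly off 0.1 due to binary rounding

print(a == b)   # False: the bit patterns differ

# Relative tolerance for comparing two nonzero values...
print(math.isclose(a, b, rel_tol=1e-9))        # True

# ...and an absolute tolerance when comparing against zero,
# where a relative tolerance would always fail.
print(math.isclose(a - b, 0.0, abs_tol=1e-9))  # True
```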

This program:
def foo():
    x = 1
    while x != 0:
        x -= .1
        if x < 0:
            x = 0
        print '%.20f' % x
foo()
prints out this:
0.90000000000000002220
0.80000000000000004441
0.70000000000000006661
0.60000000000000008882
0.50000000000000011102
0.40000000000000013323
0.30000000000000015543
0.20000000000000014988
0.10000000000000014433
0.00000000000000013878
0.00000000000000000000
You were not printing the numbers out with enough precision to see what was actually going on. Compare this with the output of print '%.20f' % x when you explicitly set x to 0.9 and 0.8 and so forth. You may want to pay particular attention to the result for 0.5.
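If the loop really must terminate at exactly zero, the `decimal` module avoids the problem entirely, because `Decimal("0.1")` is stored exactly; a Python 3 sketch of the same countdown:

```python
from decimal import Decimal

def foo():
    # Decimal("0.1") is exact, so ten subtractions reach exactly 0
    x = Decimal("1")
    values = []
    while x != 0:
        x -= Decimal("0.1")
        values.append(x)
    return values

print(foo())  # [Decimal('0.9'), ..., Decimal('0.1'), Decimal('0.0')]
```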

You're missing the point -- this isn't something you should work around. Just trust the VM and assume that it performs the computations as they should be done.
What you want is to format your number: note the difference between a value and its representation.
>>> x = 0.9
>>> while x>0.1:
... x -= 0.1
...
>>> x
1.3877787807814457e-16
>>> "{:.2f}".format(x)
'0.00'
This example shows the value rendered with two decimal places. More on formatting (including number formatting) can be found HERE

Related

Is it possible to find a few common multiples of a list of numbers, without them having to be integers?

I don't even know if something like this is possible, but:
Let us say we have three numbers:
A = 6
B = 7.5
C = 24
I would like to find a few evenly spaced common multiples of these numbers between 0 and 2.
So the requirement is: one_of_these_numbers / common_multiple = an_integer (or almost an integer with a particular tolerance)
For example, a good result would be [0.1 , 0.5 , 1 , 1.5]
I have no idea if this is possible, because one cannot iterate through a range of floats, but is there a smart way to do it?
I am using python, but a solution could be represented in any language of your preference.
Thank you for your help!
While I was writing my question, I actually came up with an idea for the solution.
To find common divisors using code, we have to work with integers.
My solution is to multiply all numbers by a factor = 1, 10, 100, ...
so that we can act as if they are integers, find their integer common divisors, and then redivide them by the factor to get a result.
Better explained in code:
a = 6
b = 7.5
c = 24
# Find a few possible divisors between 0 and 2 so that all numbers are divisible by div.
# We define a function that finds all divisors in a range of numbers, supposing all
# numbers are integers.
def find_common_divisors(numbers, range_start, range_end):
    results = []
    for i in range(range_start + 1, range_end + 1):
        if all([e % i == 0 for e in numbers]):
            results.append(i)
    return results

def main():
    nums = [a, b, c]
    range_start = 0
    range_end = 2
    factor = 1
    results = [1]
    while factor < 11:
        nums_i = [e * factor for e in nums]
        range_end_i = range_end * factor
        results += [e / factor for e in find_common_divisors(nums_i, range_start, range_end_i)]
        factor *= 10
    print(sorted(set(results)))

if __name__ == '__main__':
    main()
For these particular numbers, I get the output:
[0.1, 0.3, 0.5, 1, 1.5]
If we need more results, we can adjust while factor < 11: to a higher number than 11 like 101.
I am curious to see if I made any mistake in my code.
Happy to hear some feedback.
Thank you!
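As a side note on the feedback request: the multiply-by-a-power-of-ten idea can be made exact with `fractions.Fraction` - every exact common divisor of the inputs is g/k, where g is the fractional gcd and k an integer. A sketch (the function names and the `max_k` cutoff are mine, not from the question); it reproduces 0.1, 0.3, 0.5 and 1.5, and suggests that the 1 in the output above comes from the `results = [1]` seed rather than from an actual common divisor (7.5 / 1 is not an integer):

```python
from fractions import Fraction
from functools import reduce
from math import gcd

def frac_gcd(a, b):
    # gcd of two fractions: gcd of the cross-multiplied numerators
    # over the product of the denominators (Fraction reduces it)
    return Fraction(gcd(a.numerator * b.denominator, b.numerator * a.denominator),
                    a.denominator * b.denominator)

def common_divisors(numbers, limit, max_k=15):
    # Every exact common divisor of the inputs is g/k for an integer k >= 1
    g = reduce(frac_gcd, (Fraction(str(x)) for x in numbers))
    return sorted(float(g / k) for k in range(1, max_k + 1) if g / k <= limit)

print(common_divisors([6, 7.5, 24], limit=2))
# contains 0.1, 0.3, 0.5 and 1.5 (but not 1.0)
```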

Google Foobar Fuel Injection Perfection

Problem:
Fuel Injection Perfection
Commander Lambda has asked for your help to refine the automatic quantum antimatter fuel injection system for her LAMBCHOP doomsday device. It's a great chance for you to get a closer look at the LAMBCHOP - and maybe sneak in a bit of sabotage while you're at it - so you took the job gladly.
Quantum antimatter fuel comes in small pellets, which is convenient since the many moving parts of the LAMBCHOP each need to be fed fuel one pellet at a time. However, minions dump pellets in bulk into the fuel intake. You need to figure out the most efficient way to sort and shift the pellets down to a single pellet at a time.
The fuel control mechanisms have three operations:
Add one fuel pellet Remove one fuel pellet Divide the entire group of fuel pellets by 2 (due to the destructive energy released when a quantum antimatter pellet is cut in half, the safety controls will only allow this to happen if there is an even number of pellets) Write a function called solution(n) which takes a positive integer as a string and returns the minimum number of operations needed to transform the number of pellets to 1. The fuel intake control panel can only display a number up to 309 digits long, so there won't ever be more pellets than you can express in that many digits.
For example: solution(4) returns 2: 4 -> 2 -> 1 solution(15) returns 5: 15 -> 16 -> 8 -> 4 -> 2 -> 1
Test cases
Inputs: (string) n = "4" Output: (int) 2
Inputs: (string) n = "15" Output: (int) 5
my code:
def solution(n):
    n = int(n)
    if n == 2:
        return 1
    if n % 2 != 0:
        return min(solution(n + 1), solution(n - 1)) + 1
    else:
        return solution(int(n / 2)) + 1
This is the solution that I came up with; it passes 4 out of 10 of the test cases. It seems to be working fine, so I'm wondering if the failures are because of the extensive runtime. I thought of applying memoization but I'm not sure how to do it (or if it is even possible). Any help would be greatly appreciated :)
There are several issues to consider:
First, you don't handle the n == "1" case properly (operations = 0).
Next, by default, Python has a limit of 1000 recursions. If we take the log2 of a 309-digit number, we expect to make a minimum of about 1026 divisions to reach 1. And if each of those returns an odd result, we'd need to triple that, to over 3000 recursive operations. So, we need to bump up Python's recursion limit.
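That depth estimate is easy to sanity-check (a quick sketch):

```python
# The largest 309-digit number, 10**309 - 1, is a 1027-bit number,
# so repeated halving alone already needs over a thousand recursive calls.
print((10**309 - 1).bit_length())  # 1027
```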
Finally, for each of those divisions that does return an odd value, we'll be spawning two recursive division trees (+1 and -1). These trees will not only increase the number of recursions, but can also be highly redundant. Which is where memoization comes in:
import sys
from functools import lru_cache

sys.setrecursionlimit(3333)  # estimated by trial and error

@lru_cache()
def solution(n):
    n = int(n)
    if n <= 2:
        return n - 1
    if n % 2 == 0:
        return solution(n // 2) + 1
    return min(solution(n + 1), solution(n - 1)) + 1

print(solution("4"))
print(solution("15"))
print(solution(str(10**309 - 1)))
OUTPUT
> time python3 test.py
2
5
1278
0.043u 0.010s 0:00.05 100.0% 0+0k 0+0io 0pf+0w
>
So, bottom line is handle "1", increase your recursion limit, and add memoization. Then you should be able to solve this problem easily.
There are more memory- and runtime-efficient ways to solve this problem, which is what Google is testing for with its constraints. Every time you recurse into a function you put another call on the stack, or two calls when you recurse twice per call. While they may seem basic, a plain while loop was a lot faster for me.
Think of the number in binary: whenever you have a streak of 1s longer than one at the LSB end of the number, it makes sense to add 1 (which flips that streak to all 0s, while adding one bit to the overall length), then shift right until you find another 1 in the LSB position. You can solve it in a fixed block of memory in O(n) time (n being the number of bits) using just a while loop.
If you don't want or can't use functools, you can build your own cache this way :
cache = {}

def solution_rec(n):
    n = int(n)
    if n in cache:
        return cache[n]
    else:
        if n <= 1:
            return 0
        if n == 2:
            return 1
        if n % 2 == 0:
            div = n // 2
            cache[div] = solution_rec(div)
            return cache[div] + 1
        else:
            plus = n + 1
            minus = n - 1
            cache[plus] = solution_rec(plus)
            cache[minus] = solution_rec(minus)
            return min(cache[plus], cache[minus]) + 1
However, even though it runs much faster and makes far fewer recursive calls, it still makes too many recursive calls for Python's default configuration if you test the 309-digit limit.
It works if you set sys.setrecursionlimit to 1562.
An implementation of @rreagan3's solution, with the exception that an input of 3 should lead to a subtraction rather than an addition, even though 3 has a streak of 1's on the LSB side:
def solution(n):
    n = int(n)
    count = 0
    while n > 1:
        if n & 1 == 0:
            n >>= 1
        elif n & 2 and n != 3:
            n += 1
        else:
            n -= 1  # can also be: n &= -2
        count += 1
    return count
Demo: https://replit.com/@blhsing/SlateblueVeneratedFactor
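As a sanity check, the loop above can be compared against a straightforward memoized recursion over a range of small inputs (a sketch; `reference` is my own name for the comparison function):

```python
from functools import lru_cache

def solution(n):
    # Iterative bit-streak method from the answer above
    n = int(n)
    count = 0
    while n > 1:
        if n & 1 == 0:
            n >>= 1
        elif n & 2 and n != 3:
            n += 1
        else:
            n -= 1
        count += 1
    return count

@lru_cache(maxsize=None)
def reference(n):
    # Plain memoized recursion, for comparison only
    if n <= 2:
        return n - 1
    if n % 2 == 0:
        return reference(n // 2) + 1
    return min(reference(n + 1), reference(n - 1)) + 1

assert all(solution(str(n)) == reference(n) for n in range(1, 2000))
print("ok")
```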

In Julia: Equality of Float64 and BigFloat

In the Julia 1.0.0 REPL I get the following results:
# Line 1: This make sense. I did not expect a Float64 to equal a BigFloat.
julia> 26.1 == big"26.1"
false
# Line 2: This surprised me when Line 1 is considered. Again, like Line 1, I
# did not expect a Float64 to equal an equivalent BigFloat.
julia> 26.0 == big"26.0"
true
# Line 3: This I expected based on Line 1 behavior.
julia> 26.1 - 0.1 == big"26.1" - 0.1
false
# Line 4: This surprised me based on Line 1 behavior, but it might be
# explained based on Line 2 behavior. It seems to imply that if a Float64
# can be converted to an Integer it will compare equal to an equivalent BigFloat.
julia> 26.1 - 0.1 == big"26.1" - big"0.1"
true
It seems that Julia is doing something under the hood here for equality comparisons with Float64 and BigFloat that makes lines 2 and 4 true, while lines 1 and 3 are false. Any suggestions?
The Julia doc regarding "==" does not seem to cover this kind of thing:
https://docs.julialang.org/en/v1/base/math/#Base.:==
EDIT:
Based on a helpful comment by @EPo below, it is easy to make all the comparisons above come out true. For example, Line 1 and Line 3 are true below, though they were false above:
# Line 1 is now true.
julia> 26.1 ≈ big"26.1"
true
# Line 3 is now true.
julia> 26.1 - 0.1 ≈ big"26.1" - 0.1
true
Some floating-point numbers can be represented exactly (26.0), but not all; for instance:
julia> using Printf
julia> @printf("%.80f", 26.0)
26.00000000000000000000000000000000000000000000000000000000000000000000000000000000
julia> @printf("%.80f", 0.1)
0.10000000000000000555111512312578270211815834045410156250000000000000000000000000
The decimals 0.5, 0.25, 0.125 for example can be also represented exactly with the binary based floating point representation. So for instance you have:
julia> 26.125 - 0.125 == big"26.125" - 0.125
true
But 0.1 is a periodic number in the binary system, so it is rounded.
julia> bitstring(0.1)
"0011111110111001100110011001100110011001100110011001100110011010"
The last 52 bits represent the fraction in binary. (https://en.wikipedia.org/wiki/Double-precision_floating-point_format)
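The same inspection can be done from Python with the `struct` module (an illustration of the identical IEEE 754 layout, not Julia code):

```python
import struct

def bitstring(x):
    # Reinterpret the 8 bytes of an IEEE 754 double as a 64-bit integer
    (bits,) = struct.unpack(">Q", struct.pack(">d", x))
    return format(bits, "064b")

# sign (1 bit) | exponent (11 bits) | fraction (52 bits)
print(bitstring(0.1))
# 0011111110111001100110011001100110011001100110011001100110011010
```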
The reason they are not the same is because they are not the same
julia> using Printf
julia> string(BigFloat("26.1")-BigFloat("26"))
"1.000000000000000000000000000000000000000000000000000000000000000000000000000553e-01"
julia> @printf("%.17e", Float64(26.1) - Float64(26))
1.00000000000001421e-01
julia> Float64(26.1)-Float64(26) > BigFloat("26.1")-BigFloat("26")
true

How to Incorporate a numerical prefactor into a radical in Maxima?

(%i2) x : expand(cosh(1)*sqrt(3+5*t));
(%o2) cosh(1)*sqrt(5*t+3)
(%i3) expand(float(x));
(%o3) 1.543080634815244*(5.0*t+3.0)^0.5
How can I get Maxima to incorporate the prefactor into the radical? I'm looking for something that in this case yields something like
(%o3) (11.90548922*t + 7.143293537)^0.5
For numbers as small as these this is not a big deal, but for numerical evaluations Maxima tends to substitute rational approximations that may involve very large denominators, so that I end up with expressions where the prefactor is a very small number (like 6.35324353 × 10^-23) and the numbers inside the square root are very large (like 5212548545863256475196584785455844385452665612552468), so that it isn't obvious even what the order of magnitude of the result is.
Here's a solution which uses pattern matching.
(%i1) matchdeclare (cc, numberp, [bb, aa], all) $
(%i2) defrule (r1f, cc*bb^0.5, foof(cc,bb));
(%o2) r1f : bb^0.5*cc -> foof(cc, bb)
(%i3) defrule (r2f, aa*cc*bb^0.5, aa*foof(cc,bb));
(%o3) r2f : aa*bb^0.5*cc -> aa*foof(cc, bb)
(%i4) foof(a,b):= (expand(a^2*b))^0.5 $
(%i5) apply1 (1.543080634815244*(5.0*t + 3.0)^0.5, r1f, r2f);
(%o5) (11.90548922770908*t + 7.143293536625449)^0.5
(%i6) apply1 (1.543080634815244*x*y*(5.0*t + 3.0)^0.5, r1f, r2f);
(%o6) (11.90548922770908*t + 7.143293536625449)^0.5*x*y
(%i7) apply1 (1/(1 + 345.43*(2.23e-2*u + 8.3e-4)^0.5), r1f, r2f);
(%o7) 1/((2660.87803327*u + 99.03716446700001)^0.5 + 1)
It took some experimentation to figure out suitable rules r1f and r2f. Note that these rules match ...^0.5 but not sqrt(...) (i.e. exponent = 1/2 instead of 0.5). Of course if you want to match sqrt(...) you can create additional rules for that.
Not guaranteed to work for you -- a rule might match too much or too little. It's worth a try, anyway.
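The rule relies on the identity c·sqrt(e) = sqrt(c²·e) for c ≥ 0; a quick numerical spot-check in Python with the numbers above (the variable names are mine, not Maxima's):

```python
import math

c = 1.543080634815244   # cosh(1), the prefactor
coeffs = [5.0, 3.0]     # 5.0*t + 3.0, as a coefficient list

# Absorbing the prefactor squares it into every coefficient
absorbed = [c * c * k for k in coeffs]
print(absorbed)  # ~[11.9054892..., 7.1432935...]

# Check c*sqrt(5t + 3) == sqrt((c^2*5)t + c^2*3) at an arbitrary t = 0.7
t = 0.7
lhs = c * math.sqrt(coeffs[0] * t + coeffs[1])
rhs = math.sqrt(absorbed[0] * t + absorbed[1])
assert math.isclose(lhs, rhs)
```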

Normalize numbers from 1-.0000X to 1 - 0.0X?

I have a range of numbers from 1 down to 0.00000X. Most are small numbers, like 0.000823. How can I map them so that they are closer together in range? I used the sqrt method, but are there any other suggestions?
Update
Example: numbers between 1 and 0.1 are fine as they are. My problem is with the numbers below 0.1; I need to bring them closer to 0.1, for example:
0.00004 -> 0.0004 or 0.004
0.023 -> 0.05 or 0.09
Have you tried logarithms?
If your numbers satisfy eps < x <= 1, the function
y = 1 - C*log(x)    where C = 1/log(eps)
will map the numbers to the range 0..1. If that exact range isn't required, only that the numbers end up close together, you can drop the scale factor.
Edit:
This can be expressed without the subtraction, of course:
y = 1 + C*log(x)    where C = -1/log(eps)
For example, with an epsilon of 0.0000000001 (10^-10) and base-10 logarithms, you get C = 0.1 and:
0.0000000001 => 0
0.000000001 => 0.1
0.00000001 => 0.2
...
0.1 => 0.9
1 => 1
Edit: If you don't want to change the range 0.1 ... 1.0 but only the smaller numbers, then just rescale the range 0 ... 0.1. This can be done by multiplying x by 10 before the function is applied and dividing by 10 afterwards. Of course, in that case apply the scaling only if the value is less than 0.1.
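A minimal sketch of the log remapping, assuming base-10 logarithms and eps = 10^-10 (the function names are mine):

```python
import math

EPS = 1e-10  # smallest expected input

def remap(x):
    # Linear in log space: maps EPS -> 0 and 1 -> 1
    return 1 - math.log10(x) / math.log10(EPS)

def remap_small(x):
    # Variant that leaves 0.1..1 alone and only rescales smaller values:
    # multiply by 10 before the function, divide by 10 after
    return remap(x * 10) / 10 if x < 0.1 else x

print(remap(1e-10), remap(0.1), remap(1.0))
print(remap_small(0.01), remap_small(0.5))
```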
Well, a simple way would be to calculate the minimal one (say, 1-t), and remap the segment [1-t, 1] to [0, 1]. The mapping function could be linear:
xnew = (xold - 1) / t + 1
(of course t = 1 - min value)
