Why does -3^2 * (-3)^2 = -81?

Why does -3^2 * (-3)^2 = -81?
I thought both -3^2 and (-3)^2 equal 9. I've run the expression through Wolfram Alpha and it does indeed give the result stated above.

See https://en.wikipedia.org/wiki/Order_of_operations
The order of operations, which is used throughout mathematics,
science, technology and many computer programming languages, is
expressed here:
exponentiation and root extraction
multiplication and division
addition and subtraction
So 1 + 2 × 3 = 7
and 0 - 3^2 = -9
See specifically
https://en.wikipedia.org/wiki/Order_of_operations#Unary_minus_sign
There are differing conventions concerning the unary operator −
(usually read "minus"). In written or printed mathematics, the
expression −3^2 is interpreted to mean −(3^2) = −9.
In some applications and programming languages, notably Microsoft
Excel, PlanMaker (and other spreadsheet applications) and the
programming language bc, unary operators have a higher priority than
binary operators, that is, the unary minus has higher precedence than
exponentiation, so in those languages −3^2 will be interpreted as
(−3)^2 = 9.
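To see the difference concretely, here is a quick check in Python, which follows the written-math convention (exponentiation binds tighter than unary minus):
print(-3**2)             # parsed as -(3**2)  -> -9
print((-3)**2)           # squares -3 first   ->  9
print(-3**2 * (-3)**2)   # (-9) * 9           -> -81
Under the spreadsheet-style convention (unary minus first), -3^2 would instead be 9 and the product would be 81.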

From the link in a comment by FheFungusAmongsUs:
−x^2, in every mathematical context I have seen, always means −(x^2). So −3^2=−9.

Related

Postfix notation is changing precedence of parentheses in evaluation

In the infix expression a/b*(c+(d-e)), (d-e) will be evaluated first, but if we convert it into the postfix form ab/cde-+*, then ab/ will be evaluated first.
Why is ab/ evaluated first in postfix instead of d-e?
Multiplication and division are left-associative, meaning they are evaluated from left to right. Since a and b are terminals (no further evaluation needs to be done), ab/ is ready to be evaluated. Once we get to the last term, c+(d-e), we need to delve deeper and only then do we evaluate de-.
When you talk about "precedence" (a concept which is designed to disambiguate infix notation hence not really applicable to postfix notation) you really seem to mean "order of operations", which is a broader notion.
One thing to realize is that the order of operations taught in elementary school (often with the mnemonic PEMDAS) isn't necessarily the order of operations that a computer will use when evaluating an expression like a/b*(c+(d-e)). Using PEMDAS, you would first calculate d-e then c+(d-e) etc., which is a different order than that implicit in ab/cde-+*. But it is interesting to note that many programming languages will in fact evaluate a/b*(c+(d-e)) by using the order of ab/cde-+* rather than by a naive implementation of PEMDAS. As an example, if in Python you import the module dis and evaluate dis.dis("a/b*(c+(d-e))") to disassemble a/b*(c+(d-e)) into Python byte code, you get:
0 LOAD_NAME 0 (a)
2 LOAD_NAME 1 (b)
4 BINARY_TRUE_DIVIDE
6 LOAD_NAME 2 (c)
8 LOAD_NAME 3 (d)
10 LOAD_NAME 4 (e)
12 BINARY_SUBTRACT
14 BINARY_ADD
16 BINARY_MULTIPLY
18 RETURN_VALUE
which is easily seen to be exactly the same order of operations as ab/cde-+*. In fact, this postfix notation can be thought of as shorthand for the stack-based computation that Python uses when it evaluates a/b*(c+(d-e)).
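To make that stack-based reading concrete, here is a minimal postfix evaluator sketched in Python (the env dictionary and the sample values are assumptions made up for illustration):
import operator
OPS = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.truediv}
def eval_postfix(tokens, env):
    # Evaluate a postfix token sequence with an explicit stack.
    stack = []
    for tok in tokens:
        if tok in OPS:
            right = stack.pop()        # the second operand was pushed last
            left = stack.pop()
            stack.append(OPS[tok](left, right))
        else:
            stack.append(env[tok])     # operand: push its value
    return stack.pop()
env = {"a": 8, "b": 2, "c": 5, "d": 7, "e": 4}
print(eval_postfix(list("ab/cde-+*"), env))   # 8/2 * (5 + (7-4)) = 32.0
Reading the token string left to right reproduces exactly the bytecode order shown above: a and b are loaded and divided, and only then is the parenthesised sum built up and multiplied in.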
Both evaluation orders execute the same operations with the same arguments and produce the same result, so in that sense the difference doesn't matter.
There are differences that matter, though. The grade-school arithmetic order is never used when infix expressions are evaluated in practice, because:
It requires more intermediate results to be stored. (a+b)*(c+d)*(e+f)*(g+h) requires 4 intermediate sums to be stored in grade-school order, but only 2 in the usual order.
It's actually more complicated to implement the grade-school order in most cases; and
When sub-expressions have side-effects, the order becomes important and the usual order is easier for programmers to reason about.

Decimal expansions using lazy sequences in Clojure

Is there a package which represents decimal expansions in Clojure using lazy sequences?
For example, syntax like
(defn r `(B N x_1 x_2 x_3 ...))
could represent a real number r in base B, with decimal expansion (in math notation)
r = N . x_1 x_2 x_3 ...
with integer significand N and decimal digits 0 ≤ x_i ≤ B-1.
If the type were "smart" enough, it could handle different decimal expansions of real numbers as valid inputs, such as (10 0 9 9 9 ...) and (10 1), and consistently output decimal expansions in the latter form. It should also be able to handle overflowing digits, like reducing (10 0 15) to (10 1 5).
Is there any obstruction to working with a lazy-sequence representation of real numbers instead of the usual decimal expansion? I don't know how efficient it would be in contrast to floating-point, but it would be convenient for doing rigorous precise arithmetic involving real numbers. For example, I think there are algorithms which recursively compute the decimal expansions of π and e.
TL;DR
The short answer is that no, there is no such library, and I doubt that there will ever be one. It is possible to compute numbers to accuracy greater than IEEE double precision, but representing them as a sequence of single digits is immensely wasteful in terms of memory and impossible to do entirely lazily in the general case. For instance, try to compute (+ '(0 9 0 ...) '(0 9 1 ...)) lazily, term by term: a digit of the sum cannot be emitted until you know whether a carry will propagate back from the (possibly unbounded) tail of later digits.
The Long Version
When "computing" (approximating) the value of a real number or expression to machine precision, the operation computed is the taylor series expansion of the desired expression to N terms, until that the value of the N+1th term is less than machine precision at which point the approximation is aborted because the hardware convention cannot represent more information.
Typically you will only see the 32 and 64 bit IEEE floating point standards, however the IEEE floating point specification extends out to a whopping 128 bits of representation.
For the sake of argument, let's assume that someone extends clojure.core.math to have some representation arbitrary-precision-number, being a software floating point implementation against a backing ByteArray which through a protocol appears for all intents and purposes to be a normal java.lang.Number. All that this representation achieves is to push the machine epsilon (representational error limit) out even lower than the 5x10e-16 bound offered by IEEE DOUBLE/64. Building such a software floating point system is entirely viable and relatively well explored. However I am not aware of a Java/Clojure implementation thereof.
True arbitrary precision is not possible because we have finite memory machines to build upon, therefore at some point we must compromise on performance, memory and precision. Given some library which can correctly and generally represent an arbitrary taylor series evaluation as a sequence of decimal digits at some point, I claim that the overwhemling majority of operations on such arbitrary numbers will be truncated to some precision P either due to the need to perform comparison against a fixed precision representation such as a float or double because they are the industry standards for floating point representation.
To blow this well and truly out of the water, at a distance of 1 light-year an angular deviation of 1e-100 degrees would result in a navigational error of approximately 1.65117369558e-86 meteres. This means that the existing machine epsilon of 5x10e-16 with IEEE DOUBLE/64 is entirely acceptable even for interstellar navigation.
As you mentioned computing the decimal terms of Pi or other interesting series as a lazy sequence, here one could achieve headway only because the goal is the representation and investigation of a series/sequence rather than the addition, subtraction, multiplication and soforth between two or more such representations.
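As a sketch of that last point: unbounded "spigot" algorithms do exist that stream the digits of π one at a time. Below is the commonly cited Gibbons spigot written as a Python generator (Python is used for consistency with the other code in this thread; the same shape maps directly onto a Clojure lazy-seq):
from itertools import islice
def pi_digits():
    # Gibbons' unbounded spigot: yields decimal digits of pi forever.
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4*q + r - t < n*t:
            yield n                      # the next digit is now certain
            q, r, t, k, n, l = (10*q, 10*(r - n*t), t, k,
                                (10*(3*q + r)) // t - 10*n, l)
        else:                            # not enough information yet; refine
            q, r, t, k, n, l = (q*k, (2*q + r)*l, t*l, k + 1,
                                (q*(7*k + 2) + r*l) // (t*l), l + 2)
print(list(islice(pi_digits(), 10)))     # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
Note that this only streams the digits of one fixed constant; it does not give you lazy arithmetic between two such digit streams, which is exactly the limitation described above.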

Is this language decidable?

I'm struggling with whether or not this is decidable:
A = { x ∈ ℕ | for every y > x, 2y is the sum of two primes }
I'm inclined to think that this is decidable, given that when it is fed into a Turing machine, it will never reach an accept state and will loop forever unless it rejects. However, I also know that for a language to be decidable, an algorithm that decides it only has to exist; we don't necessarily have to know what it is. With that in mind, part of me thinks that it is decidable. Does anyone know how to prove it either way?
This language is decidable, though the proof is a bit evil.
For starters, let's think about the properties of this language. Clearly, if n is a natural number contained in the language, then every number greater than n is also in the language. Thus there are three possible forms this language can take:
This language contains all natural numbers, or
This language contains no natural numbers, or
This language contains all natural numbers greater than some natural number n.
Languages (1) and (2) are, respectively, {0, 1}* and the empty language, both of which are decidable (so there are TMs that always halt that accept those languages). Every language of form (3) is also decidable, because for any n we can easily write a TM with n hardcoded into it that simply checks whether the input is at least n. Consequently, no matter which case is true (either 1, 2, or 3), there exists some TM that always halts whose language is the language you've provided, so your language is decidable.
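To make case (3) concrete, here is a minimal sketch in Python (standing in for a formal TM; the helper name is made up for illustration) of the family of always-halting deciders, one of which must decide A even though we do not know which one:
def make_threshold_decider(n):
    # Decider for { x in N : x >= n }; it always halts.
    return lambda x: x >= n
accept_all = lambda x: True                # case (1): A is all of N
reject_all = lambda x: False               # case (2): A is empty
decide_from_5 = make_threshold_decider(5)  # one member of case (3)
print(decide_from_5(3), decide_from_5(7))  # False True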
But that said, this proof is nonconstructive. We can show that the language has to be decidable, but we can't actually find the TM that always halts that accepts it! In fact, no one knows which TM it is, because Goldbach's Conjecture (whether every even number greater than two is the sum of two primes) is an open problem in mathematics.
Hope this helps!

Why does Gnu Octave have negative zeroes?

This is an odd one I'm puzzled about. I recently noticed at the Gnu Octave prompt, it's possible to enter in negative zeroes, like so:
octave:2> abomination = -0
And it remembers it, too:
octave:3> abomination
abomination = -0
In the interest of sanity, negative zero does equal regular zero. But I also noticed that the sign has some other effects. Like these:
octave:6> 4 * 0
ans = 0
octave:7> 4 * -0
ans = -0
octave:8> 4 / 0
warning: division by zero
ans = Inf
octave:9> 4 / -0
warning: division by zero
ans = -Inf
As one can see, the sign is preserved through certain operations. But my question is why. This seems like a radical departure from standard mathematics, where zero is essentially without sign. Are there some attractive mathematical properties for having this? Does this matter in certain fields of mathematics?
FYI: Matlab, which Octave is modeled after, does not have negative zeros. Any attempt to use them is treated as a regular zero.
EDIT:
Matlab does have negative zeros, but they are not displayed in the default output.
Signed zero are part of the IEEE-754 formats, and their semantics are completely specified by those formats. They turn out to be quite useful, especially when dealing with complex branch cuts and transformations of the complex plane (see many of W. Kahan's writings on the subject for more details, such as the classic "Branch Cuts for Complex Elementary Functions, or Much Ado about Nothing's Sign Bit").
Short version: negative zero is often a good thing to have in numerical calculations, and programs that try to protect users from encountering it are often doing them a disservice. FWIW, MATLAB does seem to use negative zero as well, but since it prints numbers using the host's printf routine, they display the same as positive zero on Windows.
See this discussion on the MATLAB forums for more details on signed zero in MATLAB.
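The branch-cut point is easy to demonstrate outside Octave as well; here is a small check in Python (used here only for consistency with the other examples in this thread):
import math
print(-0.0 == 0.0)                 # True: negative zero compares equal to zero
print(math.copysign(1.0, -0.0))    # -1.0: but the sign bit is still observable
print(math.atan2(0.0, -1.0))       # 3.141592653589793: approaching the negative real axis from above
print(math.atan2(-0.0, -1.0))      # -3.141592653589793: approaching it from below
The last two lines are exactly the branch-cut behaviour Kahan's paper is about: the sign of the zero selects which side of the cut you are on.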
IEEE-754 floating point numbers have this property too. It might come in handy for limits and infinities. For example, the limit of 1/x as x → +∞ is 0, but the function approaches it from the positive side of the axis; as x → −∞ the function approaches it from the negative side, so one might give the limit as −0 in that case.
Signed Zero
Signed zero echoes the mathematical analysis concept of approaching 0 from below as a one-sided limit, which may be denoted by x → 0⁻ or x → ↑0. The notation "−0" may be used informally to denote a negative number that has been rounded to zero. The concept of negative zero also has some theoretical applications in statistical mechanics and other disciplines.

A little diversion into floating point (im)precision, part 1

Most mathematicians agree that:
e^(πi) + 1 = 0
However, most floating point implementations disagree. How well can we settle this dispute?
I'm keen to hear about different languages and implementations, and various methods to make the result as close to zero as possible. Be creative!
It's not that most floating point implementations disagree; it's just that they cannot get the accuracy necessary to give an exact answer. And the correct answer is that they can't.
π is an infinite string of digits that nobody has been able to denote by anything other than a symbolic representation, and e^x is the same, so the only way to get 100% accuracy is to go symbolic.
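For instance, here is the contrast in Python; the floating point route leaves the familiar ~1.22e-16 residue, while a symbolic library (SymPy is used here as one example of "going symbolic") gets exactly zero:
import cmath, math
import sympy
print(cmath.exp(complex(0, math.pi)) + 1)                 # ~1.2246e-16j: pi was rounded
print(sympy.simplify(sympy.exp(sympy.I * sympy.pi) + 1))  # 0: pi stays symbolic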
Here's a short list of implementations and languages I've tried. It's sorted by closeness to zero:
Scheme: (+ 1 (make-polar 1 (atan 0 -1)))
⇒ 0.0+1.2246063538223773e-16i (Chez Scheme, MIT Scheme)
⇒ 0.0+1.22460635382238e-16i (Guile)
⇒ 0.0+1.22464679914735e-16i (Chicken with numbers egg)
⇒ 0.0+1.2246467991473532e-16i (MzScheme, SISC, Gauche, Gambit)
⇒ 0.0+1.2246467991473533e-16i (SCM)
Common Lisp: (1+ (exp (complex 0 pi)))
⇒ #C(0.0L0 -5.0165576136843360246L-20) (CLISP)
⇒ #C(0.0d0 1.2246063538223773d-16) (CMUCL)
⇒ #C(0.0d0 1.2246467991473532d-16) (SBCL)
Perl: use Math::Complex; Math::Complex->emake(1, pi) + 1
⇒ 1.22464679914735e-16i
Python: from cmath import exp, pi; exp(complex(0, pi)) + 1
⇒ 1.2246467991473532e-16j (CPython)
Ruby: require 'complex'; Complex::polar(1, Math::PI) + 1
⇒ Complex(0.0, 1.22464679914735e-16) (MRI)
⇒ Complex(0.0, 1.2246467991473532e-16) (JRuby)
R: complex(argument = pi) + 1
⇒ 0+1.224606353822377e-16i
Is it possible to settle this dispute?
My first thought is to look to a symbolic language, like Maple. I don't think that counts as floating point though.
In fact, how does one represent i (or j for the engineers) in a conventional programming language?
Perhaps a better example is sin(π) = 0? (Or have I missed the point again?)
I agree with Ryan: you would need to move to another number representation system. The solution is outside the realm of floating point math because you need pi to be represented as an infinitely long decimal, so any limited-precision scheme just isn't going to work (at least not without employing some kind of fudge factor to make up for the lost precision).
Your question seems a little odd to me, as you seem to be suggesting that the Floating Point math is implemented by the language. That's generally not true, as the FP math is done using a floating point processor in hardware. But software or hardware, floating point will always be inaccurate. That's just how floats work.
If you need better precision you need to use a different number representation. Just like if you're doing integer math on numbers that don't fit in an int or long. Some languages have libraries for that built in (I know java has BigInteger and BigDecimal), but you'd have to explicitly use those libraries instead of native types, and the performance would be (sometimes significantly) worse than if you used floats.
#Ryan Fox In fact, how does one represent i (or j for the engineers) in a conventional programming language?
Native complex data types are far from unknown. Fortran had them by the mid-sixties, and the OP exhibits a variety of other languages that support them in his followup.
And complex numbers can be added to other languages as libraries (with operator overloading they even look just like native types in the code).
But unless you provide a special case for this problem, the "non-agreement" is just an expression of imprecise machine arithmetic, no? It's like complaining that
float r = 2.0f / 3.0f;
float s = 3.0f * r;
float t = s - 2.0f;
ends with (t != 0) (at least if you use a dumb enough compiler)...
I had looooong coffee chats with my best pal about irrational numbers and how they differ from other numbers. Well, we both agree on this point of view:
Irrational numbers are relations, like functions, in a way. What way? Well, think "if you want a perfect circle, give me a perfect pi". Circles are different from the other figures (4 sides, 5, 6... 100, 200 sides), but the more sides you have, the more it looks like a circle. If you have followed me so far, connecting all these ideas, here is the pi formula: [formula image in the original post: an expression for pi with an ∞ parameter]
So, pi is a function, but one that never ends, because of the ∞ parameter. But I like to think that you can have an "instance" of pi: if you replace the ∞ parameter with a very big integer, you get a very big instance of pi.
Same with e: give me a huge parameter and I will give you a huge e.
Putting all the ideas together:
Since we have memory limitations, languages and libraries can only give us huge finite instances of irrational numbers, in this case pi and e. As the final result, you get a value that only approaches 0, like the examples provided by #Chris Jester-Young.
In fact, how does one represent i (or j for the engineers) in a conventional programming language?
In a language that doesn't have a native representation, it is usually added using OOP to create a Complex class to represent i and j, with operator overloading to properly deal with operations involving other Complex numbers and/or other number primitives native to the language.
E.g.: Complex.java, C++ <complex>
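As a tiny illustration of that operator-overloading approach (Python already ships a native complex type, so the MyComplex class below is purely a made-up example of the pattern):
from dataclasses import dataclass
@dataclass
class MyComplex:
    re: float
    im: float
    def __add__(self, other):
        return MyComplex(self.re + other.re, self.im + other.im)
    def __mul__(self, other):
        return MyComplex(self.re * other.re - self.im * other.im,
                         self.re * other.im + self.im * other.re)
i = MyComplex(0.0, 1.0)
print(i * i)   # MyComplex(re=-1.0, im=0.0), i.e. i squared is -1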
Numerical Analysis teaches us that you can't rely on the precise value of small differences between large numbers.
This doesn't just affect the equation in question here, but can bring instability to everything from solving a near-singular set of simultaneous equations, through finding the zeros of polynomials, to evaluating log(~1) or exp(~0) (I have even seen special functions for evaluating log(x+1) and (exp(x)-1) to get round this).
I would encourage you not to think in terms of zeroing the difference -- you can't -- but rather in doing the associated calculations in such a way as to ensure the minimum error.
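Those "special functions" exist in most standard math libraries today as log1p and expm1, and a quick check in Python shows why they matter when cancellation is close by:
import math
x = 1e-10
print(math.exp(x) - 1)   # 1.000000082740371e-10: subtracting 1 destroys most significant digits
print(math.expm1(x))     # 1.00000000005e-10: computed directly, accurate to full precision
The naive form agrees with the true value of exp(1e-10) - 1 to only about 8 significant digits; expm1 keeps them all.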
I'm sorry, it's 43 years since I had this drummed into me at uni, and even if I could remember the references, I'm sure there's better stuff around now. I suggest this as a starting point.
If that sounds a bit patronising, I apologise. My "Numerical Analysis 101" was part of my Chemistry course, as there wasn't much CS in those days. I don't really have a feel for the place/importance numerical analysis has in a modern CS course.
It's a limitation of our current floating point computational architectures. Floating point arithmetic is only an approximation of numeric constants like e or pi (or anything beyond the precision your bits allow). I really enjoy these numbers because they defy classification, and appear to have greater entropy(?) than even the primes, which are a canonical series. A ratio defies numerical representation; sometimes simple things like that can blow a person's mind (I love it).
Luckily, entire languages and libraries can be dedicated to precision trigonometric functions by using notational concepts (similar to those described by Lasse V. Karlsen).
Consider a library/language that describes concepts like e and pi in a form that a machine can understand. Does a machine have any notion of what a perfect circle is? Probably not, but we can create an object, circle, that satisfies all the known features we attribute to it (constant radius, and the relationship of radius to circumference, 2*pi*r = C). An object like pi is then only described by that ratio. r and C can be numeric objects with whatever precision you want to give them. And e can be defined as "the unique real number such that the value of the derivative (slope of the tangent line) of the function f(x) = e^x at the point x = 0 is exactly 1", from Wikipedia.
Fun question.
