How should complex numbers be rendered?

Mathematics naive question:
What is the "canonical" way to represent 14+1i?
14+i1
or
14+i
Similarly, is it likely, in the 'real world', that scientific notation is going to creep into a complex number so as to freak out a complex numbers parser? For example,
1.2345E+02-1.7002E-09i
Edit: Finally, is it
8.45358210351126e+066i
or
8.45358210351126e+66i
i.e. does one zero-fill the exponent to three digits on the imaginary part?

No problems with MATLAB:
>> 5+i
ans =
5.0000 + 1.0000i
>> 5+1i
ans =
5.0000 + 1.0000i
>> 1.2345E+02-1.7002E-09i
ans =
1.2345e+002 -1.7002e-009i
I think this shows that scientific notation ("E") in complex numbers is handled pretty well in the "real world"... to the extent that MATLAB is an influential part of that world =)

My preference would be:
14 + i
Somehow it's more pleasing to my eyes than 14 + 1i.

I would represent your first example as:
14 + 1i
And I would certainly expect to see scientific notation in complex numbers. For example, Python happily accepts the following (using j as Python requires):
>>> 1.2345E+02-1.7002E-09j
(123.45-1.7002e-09j)
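And regarding the edit about exponent padding: CPython, at least, does not zero-fill the exponent to three digits (the e+002-style padding in the MATLAB output above appears to be a Windows C-runtime quirk rather than a convention):
>>> 8.45358210351126e+66j
8.45358210351126e+66j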

"14 + 1i" is better than "14+i1", but I'd be more likely to say or write "14 + i".
Also, 1.7002E-09i (which I haven't seen in maths, though no doubt it happens in engineering or somewhere) looks a bit ambiguous without a superscripted font: do you mean 1.7002*(10 ** -9)*i or 1.7002*(10 ** (-9*i))? So (1.7002E-09)i might be better.

Related

R -- Finding the non-complex solution from polyroot() with Im()

I am having trouble finding the non-complex solution from the complex array provided by polyroot().
coef1 = c(-10000,157.07963267949,0,0.523598775598299)
roots=polyroot(coef1)
returns
## [1] 23.01673-0.00000i -11.50837+26.40696i -11.50837-26.40696i
and I would like the index that doesn't have an imaginary part. In this case:
roots[1]
## [1] 23.01673-0i
I am going to apply this process in a loop and would like to use Im() to isolate the non-complex solution. However, when I try:
Im(roots)
## [1] -2.316106e-23  2.640696e+01 -2.640696e+01
the imaginary part of the real root is tiny but not exactly zero, and therefore I cannot use something like:
which(Im(roots) == 0)
which returns
## integer(0)
I am confident there is a real root given the plot from:
plot(function(x) -10000 + 157.07963267949*x + 0.523598775598299*x^3,xlim=c(0,50))
abline(0,0,col='red')
Is there some funny rounding going on? I would prefer a solution that doesn't involve ceiling() or anything similar. Any of you R experts have any ideas? Cheers, guys and gals!
You can use a certain error or threshold:
Re(roots)[abs(Im(roots)) < 1e-6]
[1] 23.01673
Graphically :
curve(-10000 + 157.07963267949*x + 0.523598775598299*x^3,xlim=c(0,50))
abline(0,0,col='red')
real_root <- Re(roots)[abs(Im(roots)) < 1e-6]
text(real_root,1,label=round(real_root,2),adj=c(1,-1),col='blue')
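For what it's worth, the same tolerance trick carries over to other environments; here is a sketch with Python's NumPy (note that numpy.roots takes coefficients from highest degree to lowest, the reverse of polyroot):
import numpy as np

# same cubic, coefficients ordered highest degree first
roots = np.roots([0.523598775598299, 0, 157.07963267949, -10000])
real_roots = roots[np.abs(roots.imag) < 1e-6].real
print(real_roots)  # approximately [23.01673]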
Notice the absurdly small imaginary part of the first root. It's a rounding issue; such issues are unfortunately common when computers do floating-point operations. Though it's a little inelegant, try
which(round(Im(roots), 12) == 0)
which rounds the imaginary parts to 12 decimal places (more precision than you probably need) before comparing to zero.

How to write a formula in C#?

how to write a formula like
v_r(t) = \sum_{n=0}^{N-1} A_r (L_2 - L_1)\, e^{j\left(\omega_c t - \frac{4\pi}{\lambda}\left(R + \upsilon t + \frac{L_1 + L_2}{2}\cos(\theta)\sin\left(\omega_r t + \frac{2\pi n}{N}\right)\right)\right)} \operatorname{sinc}\left(\frac{4\pi}{\lambda}\,\frac{L_2 - L_1}{2}\cos(\theta)\sin\left(\omega_r t + \frac{2\pi n}{N}\right)\right)
in c#?
You have to convert the formula into something the compiler recognizes: its equivalent using a combination of basic algebra and the Math class, like so:
p = rho*R*T + (B_0*R*T - A_0 - (C_0 / (T*T)) + (E_0 / Math.Pow(T, 4)))*rho*rho +
    (b*R*T - a - (d / T))*Math.Pow(rho, 3) +
    alpha*(a + (d / T))*Math.Pow(rho, 6) +   // note: 'T', not the undeclared lowercase 't'
    ((c*Math.Pow(rho, 3)) / (T*T))*(1 + gamma*rho*rho)*Math.Exp(-gamma*rho*rho);
Example taken from: Converting Math Equations in C#
Well, first you have to figure out what all those symbols mean. I see the sigma which usually indicates sum-of, with ∑_(n=0)^(N-1) probably translating to:
N-1
∑
n=0
This generally means the sum of the following expression where n varies from 0 to N-1. So I gather you'd need a loop there.
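As a rough sketch (Python here just for brevity, and term is a hypothetical placeholder for the whole summand, which still has to be translated):
# a minimal sketch of the summation as a loop
def v_r(t, N, term):
    total = 0.0
    for n in range(N):      # n runs from 0 to N-1, as under the sigma
        total += term(n, t) # term(n, t) stands in for the big bracketed expression
    return total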
The expression to be calculated within that loop consists of a lot of trigonometric-type functions involving π, θ, sin and cos, and the little-known sinc, which I assume is a typo :-)
The bottom line is that you need to understand the current expression before you can think about converting it to another form like a C# program. Short of knowing where it came from, or a little bit of context, we probably can't help you that much, though there's always a possibility that we have a savant/genius here who will recognise that formula off the top of their head.

As a programmer how would you explain imaginary numbers?

As a programmer I think it is my job to be good at math, but I am having trouble getting my head around imaginary numbers. I have tried Google and Wikipedia with no luck, so I am hoping a programmer can explain it to me: give me an example of a number squared that is <= 0, some example usage, etc...
I guess this blog entry is one good explanation:
The key word is rotation (as opposed to direction for negative numbers, which are as strange as imaginary numbers when you think about them: less than nothing?)
Like negative numbers modeling flipping, imaginary numbers can model anything that rotates between two dimensions "X" and "Y". Or anything with a cyclic, circular relationship.
Problem: not only am I a programmer, I am a mathematician.
Solution: plow ahead anyway.
There's nothing really magical to complex numbers. The idea behind their inception is that there's something wrong with real numbers. If you've got an expression like x^2 + 4, it is never zero, whereas x^2 - 2 is zero twice. So mathematicians got really angry and wanted there always to be zeroes for polynomials of degree at least one (they wanted an "algebraically closed" field), and created some arbitrary number j such that j = sqrt(-1). All the rules sort of fall into place from there (though they are more accurately organized differently; specifically, you formally can't actually say "hey, this number is the square root of negative one"). If there's that number j, you can get multiples of j. And you can add real numbers to j, so then you've got complex numbers. The operations with complex numbers are similar to operations with binomials (deliberately so).
The real problem with complexes isn't in all this, but in the fact that you can't define a system whereby you can get the ordinary rules for less-than and greater-than. So really, you get to where you don't define it at all. It doesn't make sense in a two-dimensional space. So in all honesty, I can't actually answer "give me an example of a number squared that is <= 0", though "j" makes sense if you treat its square as a real number instead of a complex number.
As for uses, well, I personally used them most when working with fractals. The idea behind the mandelbrot fractal is that it's a way of graphing z = z^2 + c and its divergence along the real-imaginary axes.
You might also ask why do negative numbers exist? They exist because you want to represent solutions to certain equations like: x + 5 = 0. The same thing applies for imaginary numbers, you want to compactly represent solutions to equations of the form: x^2 + 1 = 0.
Here's one way I've seen them being used in practice. In EE you are often dealing with functions that are sine waves, or that can be decomposed into sine waves. (See for example Fourier Series).
Therefore, you will often see solutions to equations of the form:
f(t) = A*cos(wt)
Furthermore, often you want to represent functions that are shifted by some phase from this function. A 90 degree phase shift will give you a sin function.
g(t) = B*sin(wt)
You can get any arbitrary phase shift by combining these two functions (called inphase and quadrature components).
h(t) = A*cos(wt) + i*B*sin(wt)
The key here is that in a linear system: if f(t) and g(t) solve an equation, h(t) will also solve the same equation. So, now we have a generic solution to the equation h(t).
The nice thing about h(t) is that it can be written compactly as
h(t) = C*exp(i*(wt + theta))
using the fact that exp(i*w) = cos(w) + i*sin(w).
There is really nothing extraordinarily deep about any of this. It is merely exploiting a mathematical identity to compactly represent a common solution to a wide variety of equations.
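A quick numerical check of that identity, using Python's cmath:
import cmath, math

w = 0.7  # arbitrary test value
print(cmath.exp(1j * w))                  # exp(iw)
print(complex(math.cos(w), math.sin(w))) # cos(w) + i*sin(w): the same value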
Well, for the programmer:
class complex {
public:
    double real;
    double imaginary;

    // construct from a real number (imaginary part 0)
    complex(double a_real) : real(a_real), imaginary(0.0) { }
    // construct from real and imaginary parts
    complex(double a_real, double a_imaginary) : real(a_real), imaginary(a_imaginary) { }

    // (a+bi) + (c+di) = (a+c) + (b+d)i
    complex operator+(const complex &other) const {
        return complex(
            real + other.real,
            imaginary + other.imaginary);
    }

    // (a+bi) * (c+di) = (ac-bd) + (ad+bc)i
    complex operator*(const complex &other) const {
        return complex(
            real*other.real - imaginary*other.imaginary,
            real*other.imaginary + imaginary*other.real);
    }

    bool operator==(const complex &other) const {
        return (real == other.real) && (imaginary == other.imaginary);
    }
};
That's basically all there is. Complex numbers are just pairs of real numbers, for which special overloads of +, * and == get defined. And these operations really just get defined like this. Then it turns out that these pairs of numbers with these operations fit in nicely with the rest of mathematics, so they get a special name.
They are not so much numbers like in "counting", but more like in "can be manipulated with +, -, *, ... and don't cause problems when mixed with 'conventional' numbers". They are important because they fill the holes left by real numbers, like that there's no number that has a square of -1. Now you have complex(0, 1) * complex(0, 1) == -1.0 which is a helpful notation, since you don't have to treat negative numbers specially anymore in these cases. (And, as it turns out, basically all other special cases are not needed anymore, when you use complex numbers)
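Python's built-in complex type (spelled with j) behaves exactly this way:
>>> 1j * 1j == -1.0
True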
If the question is "Do imaginary numbers exist?" or "How do imaginary numbers exist?" then it is not a question for a programmer. It might not even be a question for a mathematician, but rather a metaphysician or philosopher of mathematics, although a mathematician may feel the need to justify their existence in the field. It's useful to begin with a discussion of how numbers exist at all (quite a few mathematicians who have approached this question are Platonists, fyi). Some insist that imaginary numbers (as the early Whitehead did) are a practical convenience. But then, if imaginary numbers are merely a practical convenience, what does that say about mathematics? You can't just explain away imaginary numbers as a mere practical tool or a pair of real numbers without having to account for both pairs and the general consequences of them being "practical". Others insist in the existence of imaginary numbers, arguing that their non-existence would undermine physical theories that make heavy use of them (QM is knee-deep in complex Hilbert spaces). The problem is beyond the scope of this website, I believe.
If your question is much more down to earth e.g. how does one express imaginary numbers in software, then the answer above (a pair of reals, along with defined operations of them) is it.
I don't want to turn this site into math overflow, but for those who are interested: Check out "An Imaginary Tale: The Story of sqrt(-1)" by Paul J. Nahin. It talks about all the history and various applications of imaginary numbers in a fun and exciting way. That book is what made me decide to pursue a degree in mathematics when I read it 7 years ago (and I was thinking art). Great read!!
The main point is that you add numbers which you define to be solutions to quadratic equations like x^2 = -1. Name one solution to that equation i; the computation rules for i then follow from that equation.
This is similar to defining negative numbers as the solution of equations like 2 + x = 1 when you only knew positive numbers, or fractions as solutions to equations like 2x = 1 when you only knew integers.
It might be easiest to stop trying to understand how a number can be a square root of a negative number, and just carry on with the assumption that it is.
So (using the i as the square root of -1):
(3+5i)*(2-i)
= (3+5i)*2 + (3+5i)*(-i)
= 6 + 10i -3i - 5i * i
= 6 + (10 -3)*i - 5 * (-1)
= 6 + 7i + 5
= 11 + 7i
works according to the standard rules of maths (remembering that i squared equals -1 on line four).
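The same product, checked against Python's built-in complex type:
>>> (3 + 5j) * (2 - 1j)
(11+7j)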
An imaginary number is a real number multiplied by the imaginary unit i. i is defined as:
i == sqrt(-1)
So:
i * i == -1
Using this definition you can obtain the square root of a negative number like this:
sqrt(-3)
== sqrt(3 * -1)
== sqrt(3 * i * i) // Replace '-1' with 'i squared'
== sqrt(3) * i // Square root of 'i squared' is 'i' so move it out of sqrt()
And your final answer is the real number sqrt(3) multiplied by the imaginary unit i.
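Python's cmath module does exactly this computation:
>>> import cmath
>>> cmath.sqrt(-3)
1.7320508075688772j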
A short answer: Real numbers are one-dimensional, imaginary numbers add a second dimension to the equation and some weird stuff happens if you multiply...
If you're interested in finding a simple application and if you're familiar with matrices,
it's sometimes useful to use complex numbers to transform a perfectly real matrix into a triangular one in the complex space, which makes computations on it a bit easier.
The result is of course perfectly real.
Great answers so far (really like Devin's!)
One more point:
One of the first uses of complex numbers (although they were not called that at the time) was as an intermediate step in solving equations of the 3rd degree.
link
Again, this is purely an instrument that is used to answer real problems with real numbers having physical meaning.
In electrical engineering, the impedance Z of an inductor is jwL, where w = 2*pi*f (f being the frequency) and j (sqrt(-1)) means it leads a simple resistor by 90 degrees, while for a capacitor Z = 1/(jwC) = -j/(wC), which lags a simple resistor by 90 degrees.
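A quick sketch of that in Python's native complex arithmetic (the component values here are made up for illustration):
import math

f = 50.0   # hypothetical frequency, Hz
L = 0.1    # hypothetical inductance, H
C = 1e-6   # hypothetical capacitance, F

w = 2 * math.pi * f
Z_L = 1j * w * L        # inductor: purely imaginary, +90 degrees
Z_C = 1 / (1j * w * C)  # capacitor: purely imaginary, -90 degrees
print(Z_L)  # approximately 31.4j
print(Z_C)  # approximately -3183.1j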

What does the term "BODMAS" mean?

What is BODMAS and why is it useful in programming?
http://www.easymaths.com/What_on_earth_is_Bodmas.htm:
What do you think the answer to 2 + 3 x 5 is?
Is it (2 + 3) x 5 = 5 x 5 = 25 ?
or 2 + (3 x 5) = 2 + 15 = 17 ?
BODMAS can come to the rescue and give us rules to follow so that we always get the right answer:
(B)rackets (O)rder (D)ivision (M)ultiplication (A)ddition (S)ubtraction
According to BODMAS, multiplication should always be done before addition, therefore 17 is actually the correct answer according to BODMAS and will also be the answer which your calculator will give if you type in 2 + 3 x 5 .
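Programming languages generally follow the same convention; in Python, for instance:
>>> 2 + 3 * 5
17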
Why is it useful in programming? No idea, but I assume it's because you can get rid of some brackets. I am a quite defensive programmer, so my lines can look like this:
result = (((i + 4) - (a + b)) * MAGIC_NUMBER) - ANOTHER_MAGIC_NUMBER;
with BODMAS you can make this a bit clearer:
result = (i + 4 - (a + b)) * MAGIC_NUMBER - ANOTHER_MAGIC_NUMBER;
I think I'd still use the first variant: more brackets, but that way I do not have to learn yet another rule, and I run less risk of forgetting it and causing those weird hard-to-debug errors.
Just guessing at that part, though.
Mike Stone EDIT: Fixed math as Gaius points out
Another version of this (in middle school) was "Please Excuse My Dear Aunt Sally".
Parentheses
Exponents
Multiplication
Division
Addition
Subtraction
The mnemonic device was helpful in school, and still useful in programming today.
Order of operations in an expression, such as:
foo * (bar + baz^2 / foo)
Brackets first
Orders (ie Powers and Square Roots, etc.)
Division and Multiplication (left-to-right)
Addition and Subtraction (left-to-right)
source: http://www.mathsisfun.com/operation-order-bodmas.html
I don't have the power to edit @Michael Stum's answer, but it's not quite correct. He reduces
(i + 4) - (a + b)
to
(i + 4 - a + b)
They are not equivalent. The best reduction I can get for the whole expression is
((i + 4) - (a + b)) * MAGIC_NUMBER - ANOTHER_MAGIC_NUMBER;
or
(i + 4 - a - b) * MAGIC_NUMBER - ANOTHER_MAGIC_NUMBER;
When I learned this in grade school (in Canada) it was referred to as BEDMAS:
Brackets
Exponents
Division
Multiplication
Addition
Subtraction
Just for those from this part of the world...
I'm not really sure how applicable the old BODMAS mnemonic is to programming anyway. There is no guarantee on order of operations between languages, and while many keep the standard operations in that order, not all do. And then there are some languages where order of operations isn't really all that meaningful (Lisp dialects, for example). In a way, you're probably better off for programming if you forget the standard order and either use parentheses for everything (e.g. (a*b) + c) or specifically learn the order for each language you work in.
I read somewhere that especially in C/C++ splitting your expressions into small statements was better for optimisation; so instead of writing hugely complex expressions in one line, you cache the parts into variables and do each one in steps, then build them up as you go along.
The optimisation routines will use registers in places where you had variables so it shouldn't impact space but it can help the compiler a little.

A little diversion into floating point (im)precision, part 1

Most mathematicians agree that:
e^(πi) + 1 = 0
However, most floating point implementations disagree. How well can we settle this dispute?
I'm keen to hear about different languages and implementations, and various methods to make the result as close to zero as possible. Be creative!
It's not that most floating point implementations disagree; it's just that they cannot achieve the accuracy necessary to give a 100% exact answer. And the correct answer is that they can't.
π is an infinite string of digits that nobody has been able to denote by anything other than a symbolic representation, and e^x is the same, and thus the only way to get 100% accuracy is to go symbolic.
Here's a short list of implementations and languages I've tried. It's sorted by closeness to zero:
Scheme: (+ 1 (make-polar 1 (atan 0 -1)))
⇒ 0.0+1.2246063538223773e-16i (Chez Scheme, MIT Scheme)
⇒ 0.0+1.22460635382238e-16i (Guile)
⇒ 0.0+1.22464679914735e-16i (Chicken with numbers egg)
⇒ 0.0+1.2246467991473532e-16i (MzScheme, SISC, Gauche, Gambit)
⇒ 0.0+1.2246467991473533e-16i (SCM)
Common Lisp: (1+ (exp (complex 0 pi)))
⇒ #C(0.0L0 -5.0165576136843360246L-20) (CLISP)
⇒ #C(0.0d0 1.2246063538223773d-16) (CMUCL)
⇒ #C(0.0d0 1.2246467991473532d-16) (SBCL)
Perl: use Math::Complex; Math::Complex->emake(1, pi) + 1
⇒ 1.22464679914735e-16i
Python: from cmath import exp, pi; exp(complex(0, pi)) + 1
⇒ 1.2246467991473532e-16j (CPython)
Ruby: require 'complex'; Complex::polar(1, Math::PI) + 1
⇒ Complex(0.0, 1.22464679914735e-16) (MRI)
⇒ Complex(0.0, 1.2246467991473532e-16) (JRuby)
R: complex(argument = pi) + 1
⇒ 0+1.224606353822377e-16i
Is it possible to settle this dispute?
My first thought is to look to a symbolic language, like Maple. I don't think that counts as floating point though.
In fact, how does one represent i (or j for the engineers) in a conventional programming language?
Perhaps a better example is sin(π) = 0? (Or have I missed the point again?)
I agree with Ryan: you would need to move to another number representation system. The solution is outside the realm of floating-point math because you need pi to be represented as an infinitely long decimal, so any limited-precision scheme just isn't going to work (at least not without employing some kind of fudge factor to make up for the lost precision).
Your question seems a little odd to me, as you seem to be suggesting that the Floating Point math is implemented by the language. That's generally not true, as the FP math is done using a floating point processor in hardware. But software or hardware, floating point will always be inaccurate. That's just how floats work.
If you need better precision you need to use a different number representation. Just like if you're doing integer math on numbers that don't fit in an int or long. Some languages have libraries for that built in (I know java has BigInteger and BigDecimal), but you'd have to explicitly use those libraries instead of native types, and the performance would be (sometimes significantly) worse than if you used floats.
@Ryan Fox: In fact, how does one represent i (or j for the engineers) in a conventional programming language?
Native complex data types are far from unknown. Fortran had them by the mid-sixties, and the OP exhibits a variety of other languages that support them in his followup.
And complex numbers can be added to other languages as libraries (with operator overloading they even look just like native types in the code).
But unless you provide a special case for this problem, the "non-agreement" is just an expression of imprecise machine arithmetic, no? It's like complaining that
float r = 2.0f / 3.0f;  // note: plain 2/3 would be integer division, yielding 0
float s = 3.0f * r;
float t = s - 2.0f;
ends with (t != 0) (at least if you use a dumb enough compiler)...
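The same class of surprise shows up in any language with binary floating point; Python, for instance:
>>> 0.1 + 0.2 == 0.3
False
>>> 0.1 + 0.2
0.30000000000000004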
I had looooong coffee chats with my best pal talking about irrational numbers and the difference between them and other numbers. Well, both of us agree on this point of view:
Irrational numbers are relations, like functions, in a way. What way? Well, think "if you want a perfect circle, give me a perfect pi". Circles are different from the other figures (4 sides, 5, 6... 100, 200), but the more sides you have, the more it looks like a circle. If you have followed me so far, connecting all these ideas: the formulas for pi are infinite ones.
So, pi is a function, but one that never ends, because of the ∞ parameter! But I like to think that you can have an "instance" of pi: if you change the ∞ parameter for a very big int, you will have a very big pi instance.
Same with e, give me a huge parameter, I will give you a huge e.
Putting all the ideas together:
As we have memory limitations, the language and libs provide us with huge finite instances of the irrational numbers, in this case pi and e; as a final result, you will get a long approach towards 0, like the examples provided by @Chris Jester-Young.
In fact, how does one represent i (or j for the engineers) in a conventional programming language?
In a language that doesn't have a native representation, it is usually added using OOP to create a Complex class to represent i and j, with operator overloading to properly deal with operations involving other complex numbers and/or other number primitives native to the language.
E.g.: Complex.java, C++'s <complex>.
Numerical Analysis teaches us that you can't rely on the precise value of small differences between large numbers.
This doesn't just affect the equation in question here, but can bring instability to everything from solving a near-singular set of simultaneous equations, through finding the zeros of polynomials, to evaluating log(~1) or exp(~0) (I have even seen special functions for evaluating log(x+1) and (exp(x)-1) to get round this).
I would encourage you not to think in terms of zeroing the difference -- you can't -- but rather in doing the associated calculations in such a way as to ensure the minimum error.
I'm sorry, it's 43 years since I had this drummed into me at uni, and even if I could remember the references, I'm sure there's better stuff around now. I suggest this as a starting point.
If that sounds a bit patronising, I apologise. My "Numerical Analysis 101" was part of my Chemistry course, as there wasn't much CS in those days. I don't really have a feel for the place/importance numerical analysis has in a modern CS course.
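Those special functions are still with us, for what it's worth; Python's math module, for example, ships log1p and expm1:
import math

x = 1e-12
print(math.log(1 + x))  # inaccurate: 1 + x has already rounded most of x away
print(math.log1p(x))    # accurate log(1 + x) for tiny x
print(math.expm1(x))    # the matching accurate form of exp(x) - 1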
It's a limitation of our current floating-point computational architectures. Floating-point arithmetic is only an approximation of numbers like e or pi (or anything beyond the precision your bits allow). I really enjoy these numbers because they defy classification, and appear to have greater entropy(?) than even the primes, which are a canonical series. A ratio defies numerical representation; sometimes simple things like that can blow a person's mind (I love it).
Luckily entire languages and libraries can be dedicated to precision trigonometric functions by using notational concepts (similar to those described by Lasse V. Karlsen ).
Consider a library/language that describes concepts like e and pi in a form that a machine can understand. Does a machine have any notion of what a perfect circle is? Probably not, but we can create an object - circle - that satisfies all the known features we attribute to it (constant radius, relationship of radius to circumference being 2*pi*r = C). An object like pi is only described by the aforementioned ratio. r & C can be numeric objects described with whatever precision you want to give them. e can be defined as "the unique real number such that the value of the derivative (slope of the tangent line) of the function f(x) = e^x at the point x = 0 is exactly 1", from Wikipedia.
Fun question.
