Is it possible to design an automaton that accepts an irrational number?

Given a rational number, would it be possible to know whether the root or some other power of the number is an irrational number? Can an automaton be designed for such a purpose?

An irrational number is an infinite string, and if you want an automaton that can read it, it will need to continue reading infinitely.
You cannot build a decider (a machine that always halts with output true or false), but you can build an acceptor (a machine that halts with false, but continues forever for true), which I believe is what you're asking for.
Consider a machine that accepts the irrational number of the form
0.10110111011110111110...
where the lengths of the runs of 1s between the 0s keep growing. It's relatively easy to define a Turing machine that accepts this number.
(For an implementation of such a machine, I'd suggest The Annotated Turing, which also has an implementation of a machine that accepts √2.)
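As a rough illustration of that acceptor behaviour in ordinary code (a Python sketch, not a literal Turing machine; the stream interface and names here are my own), the recognizer below halts with False as soon as the pattern is violated and otherwise just keeps reading:
def accepts(stream):
    # Acceptor sketch: halt with False on any violation of 0.101101110...,
    # otherwise keep consuming symbols forever (it never returns True).
    if next(stream) != '0' or next(stream) != '.':
        return False
    expected_run, run = 1, 0
    for symbol in stream:
        if symbol == '1':
            run += 1
            if run > expected_run:          # this run of 1s is already too long
                return False
        elif symbol == '0':
            if run != expected_run:         # this run of 1s was too short
                return False
            expected_run, run = expected_run + 1, 0
        else:
            return False                    # symbol outside the alphabet {0, 1, .}
    return False                            # a finite stream cannot be the whole number

print(accepts(iter("0.1101")))              # False: the first run of 1s is too long
On the genuine infinite digit stream the loop simply never terminates, which is exactly the acceptor behaviour described above.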

What happens if you divide by Zero on a Computer?

In any given programming language (that I've worked with, at least) this raises an error.
But why? Is it built into the language that this is prohibited? Or will it compile, and the hardware will figure out that an error must be returned?
I guess the language can only handle this if the division is hard-coded, e.g. if there is a line like double z = 5.0/0.0;. If it is a function call and the divisor comes from outside, the language could not even know that this is a division by zero (at least not at compile time):
double divideByZero(double divisor){
    return 5.0/divisor;
}
where divideByZero is called with divisor = 0.0.
Update:
According to the comments/answers, it makes a difference whether you divide by the int 0 or the double 0.0.
I was not aware of that. This is interesting in itself, and I'm interested in both cases.
Also, one answer says that the CPU throws an error. How is this done? In software (which doesn't make sense on a CPU), or are there circuits that recognize division by zero? I guess this happens in the Arithmetic Logic Unit (ALU).
When an integer is divided by 0 in the CPU, this causes an interrupt.¹ A programming language implementation can then handle that interrupt by throwing an exception or employing whichever other error-handling mechanisms the language has.
When a floating point number is divided by 0, the result is infinity, NaN or negative infinity (which are special floating point values). That's mandated by the IEEE floating point standard, which any modern CPU will adhere to. Programming languages generally do as well. If a programming language wanted to handle it as an error instead, it could just check for NaN or infinite results after every floating point operation and cause an error in that case. But, as I said, that's generally not done.
¹ On x86 at least. But I imagine it's the same on most other architectures as well.
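To make the contrast concrete, here is a small Python sketch (not the C-like setting of the question): plain Python is an example of a language that turns both cases into a language-level exception itself, while NumPy floats expose the IEEE 754 behaviour described above.
import numpy as np

# Plain Python intercepts both cases itself and raises an exception:
try:
    1 // 0                      # integer division by zero
except ZeroDivisionError as e:
    print("int:", e)            # int: integer division or modulo by zero
try:
    1.0 / 0.0                   # float division by zero
except ZeroDivisionError as e:
    print("float:", e)          # float: float division by zero

# NumPy floats follow the IEEE 754 rules described above instead:
with np.errstate(divide="ignore", invalid="ignore"):
    print(np.float64(5.0) / 0.0)    # inf
    print(np.float64(-5.0) / 0.0)   # -inf
    print(np.float64(0.0) / 0.0)    # nan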

OpenMDAO 1.x relevance reduction

I have a component in OpenMDAO, without outputs, that serves to provide inputs to the rest of the group. apply_linear in that component is being called even though its output is not connected. Shouldn't the relevance reduction algorithm in OpenMDAO 1.x figure out that apply_linear for this component never needs to be called?
As it turns out, relevance reduction on a per-variable basis isn't turned on by default. You can turn it on with:
prob.root.ln_solver = LinearGaussSeidel()
prob.root.ln_solver.options['single_voi_relevance_reduction'] = True
This option is set to False by default because it uses more memory, allocating separate vectors for each quantity of interest (each vector is smaller because it only contains relevant variables, but the total size may be larger). Also, relevance reduction is only applicable when using Linear Gauss-Seidel as the top linear solver.
My reputation isn't high enough yet to leave comments, so I'm adding another answer instead. I just wanted to mention that if you're not running under MPI, activating single_voi_relevance_reduction is essentially free. The real increase in memory use isn't due to the vectors themselves, but to the index arrays that we store in order to transfer data from source arrays to target arrays. We're forced to use index arrays under MPI, because PETSc requires it, but when we're not using MPI we use Python slice objects to do our data transfers. Slice objects require very little memory.

Is this language decidable?

I'm struggling with whether or not this is decidable:
A = {x ∈ ℕ | for every y > x, 2y is the sum of two primes}
I'm inclined to think this is not decidable, since when an input is fed into a Turing machine, the machine would loop forever without ever reaching an accept state unless it rejects. However, I also know that for a language to be decidable, an algorithm that decides it merely has to exist; we don't necessarily have to know what it is. With that in mind, part of me thinks it is decidable. Does anyone know how to prove it either way?
This language is decidable, though the proof is a bit evil.
For starters, let's think about the properties of this language. Clearly, if n is a natural number contained in the language, then every number greater than n is also in the language. Thus there are three possible forms this language can take:
1. This language contains all natural numbers, or
2. This language contains no natural numbers, or
3. This language contains all natural numbers greater than some natural number n.
Languages (1) and (2) are, respectively, {0, 1}* and the empty language, both of which are decidable (so there are TMs that always halt that accept those languages). Every language of form (3) is also decidable, because for any n we can easily write a TM with n hardcoded into it that simply checks whether the input is at least n. Consequently, no matter which case is true (either 1, 2, or 3), there exists some TM that always halts whose language is the language you've provided, so your language is decidable.
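For concreteness, here is that case analysis written out as code, in a Python sketch rather than as formal Turing machines (the names are mine):
# Case (1): accept everything.          Case (2): accept nothing.
accept_all  = lambda x: True
accept_none = lambda x: False

# Case (3): for each fixed n there is a machine, with n hardcoded,
# that simply checks whether the input is at least n. It always halts.
def make_threshold_decider(n):
    return lambda x: x >= n

# One of these always-halting machines decides A; we just don't know which one.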
But that said, this proof is nonconstructive. We can show that the language has to be decidable, but we can't actually find the TM that always halts that accepts it! In fact, no one knows which TM it is, because Goldbach's Conjecture (whether every even number greater than two is the sum of two primes) is an open problem in mathematics.
Hope this helps!

Is there a programming language where every function is essentially run as a separate actor?

Is there a programming language where you don't have to define actors yourself, and every function is just run as a separate actor (which can mean a separate thread if there are free cores available) by default?
For example it means that if I write something as simple as
v = fA(x) + fB(y)
then fA and fB could be calculated simultaneously before the sum of their results was assigned to v.
I don't think there is anything this extreme, since the context-switching and communication overhead would be too big.
The closest thing I can think of to what you are asking is data-parallel programming, where the program is mostly written in the same style as a sequential version but parts of it are run in parallel where possible.
Examples are loop vectorization in Fortran and the "par" magic in Haskell.
Haskell's par combinator lets you evaluate expressions concurrently (which can mean in separate threads if there are free cores available). All you have to do is:
x `par` y
which will evaluate x and y concurrently and return the value of y. Note that x and y can be expressions of arbitrary complexity.
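For comparison, here is the v = fA(x) + fB(y) example from the question written with explicit futures in Python; this is only a sketch with placeholder fA and fB, and the parallelism is opted into by hand rather than being automatic:
from concurrent.futures import ThreadPoolExecutor

def fA(x): return x * x          # placeholder implementations
def fB(y): return y + 1

x, y = 3, 4
with ThreadPoolExecutor() as pool:
    a = pool.submit(fA, x)       # fA and fB may now run on separate threads
    b = pool.submit(fB, y)
    v = a.result() + b.result()  # wait for both, then combine the results

print(v)                         # 14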
Joule is a pure asynchronous message passing language:
http://en.wikipedia.org/wiki/Joule_%28programming_language%29
http://www.erights.org/history/joule/MANUAL.BK5.pdf
ActorScript is a pure Actor message-passing language, but appears to only exist as a specification:
http://arxiv.org/abs/1008.2748

How do programming languages handle huge number arithmetic

For a computer working with a 64-bit processor, the largest number that it can handle in a single register would be about 2^64 = 18,446,744,073,709,551,616. How do programming languages, say Java, or C and C++, handle arithmetic with numbers higher than this value? No register can hold such a number as a single piece. How is this issue tackled?
There are lots of specialized techniques for doing calculations on numbers larger than the register size. Some of them are outlined in the Wikipedia article on arbitrary-precision arithmetic.
Low-level languages, like C and C++, leave large-number calculations to the library of your choice. One notable example is the GNU Multiple Precision (GMP) library. High-level languages, like Python and others, integrate this into the core of the language, so normal numbers and very large numbers are identical to the programmer.
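For instance, in Python (one of the high-level languages mentioned), ordinary integers simply keep growing past the 64-bit register size:
print(2**64)          # 18446744073709551616 -- already past a 64-bit register
print(2**64 * 2**64)  # 340282366920938463463374607431768211456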
You assume the wrong thing. The biggest number it can handle in a single register is a 64-bit number. However, with some smart programming techniques, you could simply combine a few dozen of those 64-bit numbers in a row to represent a huge 6400-bit number and use that for further calculations. It's just not as fast as having the number fit in one register.
Even the old 8- and 16-bit processors used this trick, where they would just let the number overflow into other registers. It makes the math more complex, but it doesn't put an end to the possibilities.
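Here is a rough Python sketch of that idea, treating each 64-bit machine word as one "digit" (a limb) of a bigger number. Python's own integers are already arbitrary precision, so this is purely illustrative, and the function name is mine:
BASE = 2**64   # one "digit" per 64-bit machine word (a limb)

def add_limbs(a, b):
    # Add two numbers stored as little-endian lists of 64-bit limbs.
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        result.append(s % BASE)   # the low 64 bits stay in this limb
        carry = s // BASE         # the overflow carries into the next limb
    if carry:
        result.append(carry)
    return result

# 2**64 - 1 plus 1 overflows the first limb into a second one:
print(add_limbs([2**64 - 1], [1]))   # [0, 1], i.e. 2**64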
However, such high-precision math is extremely unusual. Even if you want to calculate the whole national debt of the USA and store the outcome in Zimbabwean dollars, a 64-bit integer would still be big enough, I think. It's definitely big enough to contain the balance of my savings account, though.
Programming languages that handle truly massive numbers use custom number primitives that go beyond normal operations optimized for 32, 64, or 128 bit CPUs. These numbers are especially useful in computer security and mathematical research.
The GNU Multiple Precision Library is probably the most complete example of these approaches.
You can handle larger numbers by using arrays. To see the problem first, type the following code into your web browser's JavaScript console:
The point at which JavaScript fails
console.log(9999999999999998 + 1)
// expected 9999999999999999
// actual 10000000000000000 oops!
JavaScript cannot represent every integer above Number.MAX_SAFE_INTEGER (2^53 − 1 = 9,007,199,254,740,991) exactly, which is why the addition above goes wrong. But writing your own number primitive to make this calculation work is simple enough. Here is an example using a custom number adder class in JavaScript.
Passing the test using a custom number class
// Require a custom number primitive class
const {Num} = require('./bases')
// Create a massive number that JavaScript will not add to (correctly)
const num = new Num(9999999999999998, 10)
// Add to the massive number
num.add(1)
// The result is correct (where plain JavaScript Math would fail)
console.log(num.val) // 9999999999999999
How it Works
You can look in the code at class Num { ... } to see details of what is happening; but here is a basic outline of the logic in use:
Classes:
The Num class contains an array of single Digit classes.
The Digit class contains the value of a single digit, and the logic to handle the Carry flag
Steps:
The chosen number is turned into a string
Each digit is turned into a Digit class and stored in the Num class as an array of digits
When the Num is incremented, the increment is applied to the first Digit in the array (the right-most digit)
If the Digit value plus the Carry flag equals the Base, then the next Digit to the left is incremented and the current digit is reset to 0
... Repeat all the way to the left-most digit of the array
Logically it is very similar to what is happening at the machine level, but here it is unbounded. You can read more about how carrying digits works; the same idea applies to numbers in any base.
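The ./bases module itself isn't shown, but the digit-and-carry steps above are easy to sketch. Here is a minimal Python version of the same logic (my own sketch, not the author's Num/Digit classes):
def add_small(digits, k, base=10):
    # digits: little-endian list of single digits; add a small integer k
    # and propagate the carry digit by digit, as in the steps above.
    i, carry = 0, k
    while carry:
        if i == len(digits):
            digits.append(0)            # grow at the most significant end
        carry, digits[i] = divmod(digits[i] + carry, base)
        i += 1
    return digits

# The addition that plain JavaScript Numbers get wrong, done digit by digit:
n = [int(d) for d in reversed(str(9999999999999998))]
add_small(n, 1)
print(''.join(map(str, reversed(n))))   # 9999999999999999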
Ada actually supports this natively, but only for its typeless constants ("named numbers"). For actual variables, you need to go find an arbitrary-length package. See Arbitrary length integer in Ada
More-or-less the same way that you do. In school, you memorized single-digit addition, multiplication, subtraction, and division. Then, you learned how to do multiple-digit problems as a sequence of single-digit problems.
If you wanted to, you could multiply two twenty-digit numbers together using nothing more than knowledge of a simple algorithm, and the single-digit times tables.
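Here is a Python sketch of exactly that: long multiplication over lists of digits, using nothing beyond single-digit products and carries (digits are stored little-endian; real bignum libraries do the same thing with machine words and cleverer algorithms):
def to_digits(n):   return [int(d) for d in reversed(str(n))]
def from_digits(d): return int(''.join(map(str, reversed(d))))

def multiply(a, b, base=10):
    # Schoolbook long multiplication on little-endian digit lists:
    # one single-digit product at a time, carrying as you go.
    result = [0] * (len(a) + len(b))
    for i, da in enumerate(a):
        carry = 0
        for j, db in enumerate(b):
            carry, result[i + j] = divmod(result[i + j] + da * db + carry, base)
        result[i + len(b)] += carry
    while len(result) > 1 and result[-1] == 0:
        result.pop()                      # drop leading zeros
    return result

# Two twenty-digit numbers, checked against Python's built-in big integers:
a, b = 12345678901234567890, 98765432109876543210
assert from_digits(multiply(to_digits(a), to_digits(b))) == a * b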
In general, the language itself doesn't handle high-precision, high-accuracy large number arithmetic. It's far more likely that a library is written that uses alternate numerical methods to perform the desired operations.
For example (I'm just making this up right now), such a library might emulate the actual techniques that you might use to perform that large number arithmetic by hand. Such libraries are generally much slower than using the built-in arithmetic, but occasionally the additional precision and accuracy is called for.
As a thought experiment, imagine the numbers stored as a string. With functions to add, multiply, etc these arbitrarily long numbers.
In reality these numbers are probably stored in a more space efficient manner.
Think of one machine-size number as a digit and apply the algorithm for multi-digit multiplication from primary school. Then you don't need to keep the whole numbers in registers, just the digits as they are worked on.
Most languages store them as an array of integers. If you add or subtract two of these big numbers, the library adds/subtracts all the integer elements in the array separately and handles the carries/borrows.
It's like manual addition/subtraction in school, because that is essentially how it works internally.
Some languages use real text strings instead of integer arrays, which is less efficient but simpler to transform into a text representation.
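To make the borrows concrete, here is a small Python sketch of subtraction on such an array of digits. It stores digits little-endian, assumes the first number is at least as large as the second, and uses decimal digits where a real library would use machine-word elements:
def sub_digits(a, b, base=10):
    # a - b on little-endian digit lists (assumes a >= b), borrowing from the
    # next element exactly like column subtraction in school.
    result, borrow = [], 0
    for i in range(len(a)):
        d = a[i] - borrow - (b[i] if i < len(b) else 0)
        borrow = 0
        if d < 0:
            d += base
            borrow = 1          # borrow from the next, more significant digit
        result.append(d)
    while len(result) > 1 and result[-1] == 0:
        result.pop()            # drop leading zeros
    return result

print(sub_digits([0, 0, 1], [1]))   # 100 - 1 -> [9, 9], i.e. 99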
