Sometimes when coding in ZX Spectrum Basic I need to evaluate logical expressions that are formed by two operands and a logical xor like this:
IF (left operand) xor (right operand) THEN
Since ZX Basic only knows NOT, OR and AND, I have to resort to some sort of fancy calculation that uses the left/right operands multiple times. This is awkward since it consumes time and memory, both of which are scarce on an 8-bit machine. I wonder if there's a neat trick to mimic the xor operator.
To test the outcome I provide a small code sample:
5 DEF FN x(a,b)=(a ??? b) : REM the xor formula, change here
10 FOR a=-1 TO 1 : REM left operand
20 FOR b=-1 TO 1 : REM right operand
30 LET r=FN x(a,b) : REM compute xor
40 PRINT "a:";a;" b:";b;" => ";r
50 NEXT b
60 NEXT a
Can you help me find a performant solution? So far I tried DEF FN x(a,b)=(a AND NOT b) OR (b AND NOT a) but it's somewhat clumsy.
Edit:
If you want to test your idea I suggest the BasinC v1.69 ZX emulator (Windows only).
As @Jeff pointed out, most BASICs, the ZX one included, treat zero values as false and non-zero ones as true.
I have adapted the sample to test with a variety of non-zero values.
The logical xor is semantically equivalent to not equal.
IF (left operand) <> (right operand) THEN
should work.
Edit: In the case of integer operands you can use
IF ((left operand) <> 0) <> ((right operand) <> 0) THEN
DEF FN x(a,b)=((NOT a) <> (NOT b))
Using NOT as coercion to a boolean value.
EDIT: Previously each side had NOT NOT, which is unnecessary for establishing a difference between the two, as a single NOT still coerces!
EDIT 2: Added parens to sort out a precedence issue.
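For anyone without an emulator handy, here is a quick cross-check of the identity in Python (a sketch of the logic, not ZX Basic): Python's not plays the role of the BASIC NOT coercion, and != plays the role of <>.
# Sketch: verify that (NOT a) <> (NOT b) behaves as logical xor
# for arbitrary integer operands (0 is false, non-zero is true).
for a in (-1, 0, 1, 7):
    for b in (-1, 0, 1, 7):
        basic_style = (not a) != (not b)   # mirrors (NOT a) <> (NOT b)
        logical_xor = bool(a) ^ bool(b)    # reference definition of xor
        assert basic_style == logical_xor
print("(NOT a) <> (NOT b) matches xor for all sampled operands")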
Finding this question and the answers here very interesting and fun, I would like to share the results of some performance tests (performed on an emulator):
Elapsed times are in seconds; less is better.
The x1 test only checks that the expression meets the requirements and includes the printout of the results; the x256 test repeats the same test 256 times without printing any output. The "without FN" tests are the same, but without factoring the expression into an FN statement.
I have also shared the code and test suite on GitHub at https://github.com/rondinif/XOR-in-ZX-Spectrum-basic for the benefit of all retro-computing fanatics (like me), so we can compare notes.
Keep in mind that the values are integers.
I think a mathematical operation could be fun: (A-B)*(A-B) should work.
It should be less time-consuming, since it is based on simple operations.
Or, with ABS: ABS (A-B)
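One caveat worth checking (again a Python sketch, not ZX Basic): the arithmetic forms match xor only when the operands are already normalised to 0/1, e.g. because they come from comparisons; for arbitrary non-zero "true" values they can disagree with logical xor.
# Sketch: on 0/1 operands, (A-B)*(A-B) and ABS(A-B) both equal A xor B.
for a in (0, 1):
    for b in (0, 1):
        assert (a - b) * (a - b) == abs(a - b) == (a ^ b)
# Counterexample with truthy-but-not-1 values: 1 and -1 are both "true",
# so xor should be false, yet ABS(1 - (-1)) = 2, which is "true".
print(abs(1 - (-1)))  # 2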
I am trying to write
testFunc = function(x){x^0.3 * (1-x)^0.7}
but when I try
testFunc(2)
R returns a NaN result (for any x>1). How can I solve this problem?
If you try to raise a negative floating-point value to a fractional exponent, you'll always get NaN. This is not necessarily the mathematically correct answer - for example, we know that the cube root of -8, i.e. (-8)^(1/3), "should" be -2, since (-2)^3 == -8. From ?"^":
Users are sometimes surprised by the value returned, for example
why ‘(-8)^(1/3)’ is ‘NaN’. For double inputs, R makes use of IEC
60559 arithmetic on all platforms, together with the C system
function ‘pow’ for the ‘^’ operator. The relevant standards
define the result in many corner cases. In particular, the result
in the example above is mandated by the C99 standard. On many
Unix-alike systems the command ‘man pow’ gives details of the
values in a large number of corner cases.
If you really want to raise negative values to fractional powers, you could use as.complex():
as.complex(-1)^0.7
[1] -0.5877853+0.809017i
Your function would be
function(x){x^0.3 * as.complex(1-x)^0.7}
but you might need to rethink the mathematical foundations of whatever you're trying to do ...
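The same behaviour is easy to reproduce outside R; here is a sketch in Python of both halves of the answer (math.pow follows the same C pow()/IEC 60559 rules, and complex arithmetic yields the principal root rather than the real cube root):
import cmath
import math

# math.pow follows C's pow(): a negative base with a fractional
# exponent is a domain error (R reports NaN in the same situation).
try:
    math.pow(-8.0, 1 / 3)
except ValueError as err:
    print("domain error:", err)

# Complex arithmetic gives the *principal* root, which is complex
# rather than the real cube root -2 (compare as.complex(-1)^0.7):
print(complex(-8.0) ** (1 / 3))        # approx (1+1.7320508j)
print(cmath.exp(cmath.log(-8.0) / 3))  # same principal root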
I need to find the next power of two smaller than a given number in Julia.
i.e. smallerpoweroftwo(15) should return 8 but smallerpoweroftwo(17) should return 16
I have this so far, but searching through the string of bits seems a bit hacky to me. Maybe it's not... Any ideas?
function smallerpoweroftwo(n::Int)
2^(length(bits(n)) - search(bits(n), '1'))
end
Thanks!
Edit:
I was mainly thinking is there a more elegant way to do this just using bitwise arithmetic. Or is there a bit length function somewhere like in some other languages?
Julia's standard library has prevpow2 and nextpow2 functions:
help?> prevpow2
search: prevpow2 prevpow prevprod
prevpow2(n)
The largest power of two not greater than n. Returns 0 for n==0, and returns -prevpow2(-n) for negative
arguments.
help?> nextpow2
search: nextpow2 nextpow nextprod
nextpow2(n)
The smallest power of two not less than n. Returns 0 for n==0, and returns -nextpow2(-n) for negative
arguments.
The prevpow2 function should do what you want.
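Regarding the "is there a bit length function somewhere like in some other languages?" part of the edit: yes, such primitives are common, and they make this a one-liner. A sketch in Python (not Julia) using int.bit_length; the function name is mine:
# prevpow2-style floor to a power of two via the bit length.
def smaller_power_of_two(n):
    return 1 << (n.bit_length() - 1)  # isolate the highest set bit of n

print(smaller_power_of_two(15))  # 8
print(smaller_power_of_two(17))  # 16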
How about this?[1]
2^floor(Int, log(2,n-1))
[1] Added the exponent to the solution after a comment by jverzani.
I am trying to make a 100 x 100 tridiagonal matrix with 2s going down the diagonal and -1s surrounding the 2s. I can make a tridiagonal matrix with only 1s in the three diagonals and perform matrix addition to get what I want, but I want to know if there is a way to customize the three diagonals to whatever you want. The Maple help doesn't list anything useful.
The Matrix function in the LinearAlgebra package can be called with a parameter (init) that is a function that can assign a value to each entry of the matrix depending on its position.
This would work:
f := (i, j) -> if i = j then 2 elif abs(i - j) = 1 then -1 else 0; end if;
Matrix(100, f);
LinearAlgebra[BandMatrix] works too (and will be WAY faster), especially if you use storage=band[1]. You should probably use shape=symmetric as well.
The answers involving an initializer function f will do O(n^2) work for a square n x n Matrix. Ideally, this task should be O(n), since there are just under 3*n entries to be filled.
Suppose also that you want a resulting Matrix without any special (e.g. band) storage or indexing function (so that you can later write to any part of it arbitrarily). And suppose also that you don't want to get around such an issue by wrapping the band-structure Matrix in another generic Matrix() call, which would double the temporary memory used and produce collectible garbage.
Here are two ways to do it (without applying f to each entry in an O(n^2) manner, or using a separate do-loop). The first one involves creating the three bands as temporaries (which is garbage to be collected, but at least not O(n^2) worth of it).
M:=Matrix(100,[[-1$99],[2$100],[-1$99]],scan=band[1,1]);
This second way uses a routine which walks M and populates it with just the three scalar values (hence not needing the 3 band lists explicitly).
M:=Matrix(100):
ArrayTools:-Fill(100,2,M,0,100+1);   # main diagonal: 100 entries, stride n+1
ArrayTools:-Fill(99,-1,M,1,100+1);   # superdiagonal: 99 entries, offset 1
ArrayTools:-Fill(99,-1,M,100,100+1); # subdiagonal: 99 entries, offset n
Note that ArrayTools:-Fill is a compiled external routine, and so in principle might well be faster than a method written in the interpreted Maple language proper. It would be especially fast for a Matrix M with a hardware datatype such as 'float[8]'.
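For readers more familiar with NumPy than Maple, here is a sketch of the same O(n) strided-fill idea in Python; the offsets and the n+1 stride correspond one-to-one to the Fill calls above.
import numpy as np

n = 100
M = np.zeros((n, n))
# In the flattened array, consecutive entries of any diagonal lie
# n+1 apart, exactly as in the ArrayTools:-Fill calls above.
M.flat[0::n + 1] = 2    # main diagonal: offset 0, 100 entries
M.flat[1::n + 1] = -1   # superdiagonal: offset 1, 99 entries
M.flat[n::n + 1] = -1   # subdiagonal: offset n, 99 entries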
By the way, the reason that the arrow procedure above failed with the error "invalid arrow procedure" is likely that it was entered in 2D Math mode. The 2D Math parser of Maple 13 does not understand the if...then...end if syntax as the body of an arrow operator. Alternatives (apart from writing f as a proc, as someone else answered) are to enter f (unedited) in 1D Maple notation mode, or to edit f to use the operator form of if. Note that the operator form of if here requires a nested if to handle the elif. For example,
f := (i,j) -> `if`(i=j,2,`if`(abs(i-j)=1,-1,0));
Matrix(100,f);
jmbr's proposed solutions can be adapted to work:
f := proc(i, j)
if i = j then 2
elif abs(i - j) = 1 then -1
else 0
end if
end proc;
Matrix(100, f);
Also, I understand your comment as saying you later need to destroy the band matrix nature, which prevents you from using BandMatrix - is that right? The easiest solution to that is to wrap the BandMatrix call in a regular Matrix call, which will give you a Matrix you can change however you'd like:
Matrix(LinearAlgebra:-BandMatrix([1,2,1], 1, 100));
I have a computer program that reads in an array of chars containing operands and operators written in postfix notation. The program then scans through the array and works out the result by using a stack, as shown:
get next char in array until there are no more
  if char is operand
    push operand onto stack
  if char is operator
    a = pop from stack
    b = pop from stack
    perform operation using a and b as arguments
    push result onto stack
result = pop from stack
How do I prove by induction that this program correctly evaluates any postfix expression? (taken from exercise 4.16 Algorithms in Java (Sedgewick 2003))
I'm not sure which expressions you need to prove the algorithm against. But if they look like typical RPN expressions, you'll need to establish something like the following:
1) algorithm works for 2 operands (and one operator)
and
algorithm works for 3 operands (and 2 operators)
==> that would be your base case
2) if algorithm works for n operands (and n-1 operators)
then it would have to work for n+1 operands.
==> that would be the inductive part of the proof
Good luck ;-)
Take heart concerning mathematical proofs, and also their sometimes confusing names. In the case of an inductive proof one is still expected to "figure out" something (some fact or some rule), sometimes by deductive logic, but then these facts and rules put together constitute a broader truth, by induction. That is: because the base case is established as true, and because we proved that if X is true for an "n" case then X is also true for the "n+1" case, we don't need to try every case (which could be a big number, or even infinite).
Back to the stack-based expression evaluator... One final hint (in addition to Captain Segfault's excellent explanation, you're going to feel over-informed...).
The RPN expressions are such that:
- they have one fewer operator than operands
- they never provide an operator when the stack has fewer than 2 operands in it (if they did, this would be the equivalent of an unbalanced-parenthesis situation in a plain expression, i.e. an invalid expression).
Assuming that the expression is valid (and hence doesn't provide too many operators too soon), the order in which the operands/operators are fed into the algorithm does not matter; they always leave the system in a stable situation:
- either with one extra operand on the stack (but the knowledge that one extra operator will eventually come to consume it), or
- with one fewer operand on the stack (but the knowledge that the number of operands still to come is also one less).
So the order doesn't matter.
Do you know what induction is? Do you generally see how the algorithm works? (Even if you can't prove it yet?)
Your induction hypothesis should say that, after processing the Nth character, the stack is "correct". A "correct" stack for a full RPN expression has just one element (the answer). For a partial RPN expression the stack has several elements.
Your proof is then to think of this algorithm (minus the result = pop from stack line) as a parser that turns partial RPN expressions into stacks, and prove that it turns them into the correct stacks.
It might help to look at your definition of an RPN expression and work backwards from it.
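To make the object of the proof concrete, here is a minimal Python sketch of the evaluator from the question (assuming single-digit operands and the binary operators +, - and *; the names are mine, not Sedgewick's):
def eval_postfix(expr):
    stack = []
    for ch in expr:
        if ch.isdigit():        # operand: push it
            stack.append(int(ch))
        elif ch in "+-*":       # operator: pop two operands, apply, push
            a = stack.pop()
            b = stack.pop()
            if ch == "+": stack.append(b + a)
            elif ch == "-": stack.append(b - a)
            elif ch == "*": stack.append(b * a)
    return stack.pop()          # a correct final stack holds one element

print(eval_postfix("34+2*"))   # (3 + 4) * 2 = 14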
Is it impossible to know if two functions are equivalent? For example, a compiler writer wants to determine if two functions that the developer has written perform the same operation; what methods can he use to figure that out? Or what can we do to find out whether two TMs are identical? Is there a way to normalize the machines?
Edit: If the general case is undecidable, how much information do you need to have before you can correctly say that two functions are equivalent?
Given an arbitrary function, f, we define a function f' which returns 1 on input n if f halts on input n. Now, for some number x we define a function g which, on input n, returns 1 if n = x, and otherwise calls f'(n).
If functional equivalence were decidable, then deciding whether g is identical to f' decides whether f halts on input x. That would solve the Halting problem. Related to this discussion is Rice's theorem.
Conclusion: functional equivalence is undecidable.
There is some discussion going on below about the validity of this proof. So let me elaborate on what the proof does, and give some example code in Python.
The proof creates a function f' which on input n starts to compute f(n). When this computation finishes, f' returns 1. Thus, f'(n) = 1 iff f halts on input n, and f' doesn't halt on n iff f doesn't. Python:
def create_f_prime(f):
    def f_prime(n):
        f(n)
        return 1
    return f_prime
Then we create a function g which takes n as input, and compares it to some value x. If n = x, then g(n) = g(x) = 1, else g(n) = f'(n). Python:
def create_g(f_prime, x):
    def g(n):
        return 1 if n == x else f_prime(n)
    return g
Now the trick is, that for all n != x we have that g(n) = f'(n). Furthermore, we know that g(x) = 1. So, if g = f', then f'(x) = 1 and hence f(x) halts. Likewise, if g != f' then necessarily f'(x) != 1, which means that f(x) does not halt. So, deciding whether g = f' is equivalent to deciding whether f halts on input x. Using a slightly different notation for the above two functions, we can summarise all this as follows:
def halts(f, x):
    def f_prime(n): f(n); return 1
    def g(n): return 1 if n == x else f_prime(n)
    return equiv(f_prime, g)  # If only equiv would actually exist...
I'll also toss in an illustration of the proof in Haskell (GHC performs some loop detection, and I'm not really sure whether the use of seq is foolproof in this case, but anyway):
-- Tells whether two functions f and g are equivalent.
equiv :: (Integer -> Integer) -> (Integer -> Integer) -> Bool
equiv f g = undefined -- If only this could be implemented :)

-- Tells whether f halts on input x
halts :: (Integer -> Integer) -> Integer -> Bool
halts f x = equiv f' g
  where
    f' n = f n `seq` 1
    g n = if n == x then 1 else f' n
Yes, it is undecidable. This is a form of the halting problem.
Note that I mean that it's undecidable for the general case. Just as you can determine halting for sufficiently simple programs, you can determine equivalency for sufficiently simple functions, and it's not inconceivable that this could be of some use for an application. But you cannot make a general method for determining equivalency of any two possible functions.
The general case is undecidable by Rice's Theorem, as others have already said (Rice's Theorem essentially says that any nontrivial semantic property of programs in a Turing-complete formalism is undecidable).
There are special cases where equivalence is decidable; the best-known example is probably equivalence of finite state automata. If I remember correctly, equivalence of pushdown automata is already undecidable, by reduction from Post's Correspondence Problem.
To prove that two given functions are equivalent you would require as input a proof of the equivalence in some formalism, which you can then check for correctness. The essential parts of this proof are the loop invariants, as these cannot be derived automatically.
In the general case it's undecidable whether two Turing machines always have the same output for identical input. Since you can't even decide whether a TM will halt on the input, I don't see how it should be possible to decide whether both halt AND output the same result...
It depends on what you mean by "function."
If the functions you are talking about are guaranteed to terminate -- for example, because they are written in a language in which all functions terminate -- and operate over finite domains, it's "easy" (although it might still take a very, very long time): two functions are equivalent if and only if they have the same value at every point in their shared domain.
This is called "extensional" equivalence to distinguish it from syntactic or "intensional" equivalence. Two functions are extensionally equivalent if they are intensionally equivalent, but the converse does not hold.
(All the other people above noting that it is undecidable in the general case are quite correct, of course; this is a fairly uncommon, and usually uninteresting in practice, special case.)
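As a sketch of what "easy" means here (in Python, with hypothetical example functions of my own): for total functions on a small finite shared domain, equivalence is just exhaustive comparison.
# Brute-force extensional equivalence, feasible only for total
# functions over a (small) finite shared domain.
def extensionally_equal(f, g, domain):
    return all(f(x) == g(x) for x in domain)

# Two intensionally different but extensionally equal functions:
def double_by_mul(x): return x * 2
def double_by_add(x): return x + x

print(extensionally_equal(double_by_mul, double_by_add, range(256)))  # True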
Note that the halting problem is decidable for linear bounded automata. Real computers are always bounded, and a program on them that does not halt will eventually loop back to a previous configuration after sufficiently many steps. If you are using an unbounded (imaginary) computer to keep track of the configurations, you can detect that looping and take it into account.
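A sketch of that configuration-tracking idea in Python (step, is_final and the hashable config representation are assumptions of mine, not part of the answer):
# Decide halting for a deterministic, finitely-bounded machine by
# recording every configuration seen; a repeat means it loops forever.
def halts_bounded(step, config, is_final):
    seen = set()
    while not is_final(config):
        if config in seen:
            return False        # revisited a configuration: infinite loop
        seen.add(config)
        config = step(config)   # deterministic successor configuration
    return True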
You could check in your compiler to see if they are "exactly" identical, sure, but determining if they return identical values would be difficult and time-consuming. You would basically have to call each method over an infinite number of possible inputs and compare the value from one routine with that from the other.
Even if you could do the above, you would have to account for what global values change within the function, what objects are destroyed / changed in the function that do not affect the outcome.
You can really only compare the compiled code. So compile the compiled code to refactor?
Imagine the run time on trying to compile the code with "that" compiler. You could spend a LOT of time on here answering questions saying: "busy compiling..." :)
I think if you allow side effects, you can show that the problem can be morphed into the Post Correspondence Problem, so you can't, in general, show whether two functions are even capable of having the same side effects.
Is it impossible to know if two functions are equivalent?
No. It is possible to know that two functions are equivalent. If you have f(x), you know f(x) is equivalent to f(x).
If the question is "is it possible to determine whether f(x) and g(x) are equivalent, for all functions f and g", then the answer is no.
However, if the question is "can a compiler determine, for particular f(x) and g(x) that are equivalent, that they are equivalent?", then the answer is yes, provided they are equivalent in output, side effects, and order of side effects. In other words, if one is a transformation of the other that preserves behavior, then a compiler of sufficient complexity should be able to detect it. It also means that the compiler can transform a function f into a more optimal and equivalent function g, given a particular definition of equivalent. It gets even more fun if f includes undefined behavior, because then g can also include undefined (but different) behavior!