Sometimes one device needs to send commands to another device. In simple cases, I often use strings of digits and letters like
"123X456YG"
with the meaning
set parameter X to 123, set Y to 456, and then Go.
The interpretation algorithm is quite simple: process the string char by char, "push" each digit to build a number (n = 10*n + ch-'0'), and execute an action (and reset the number) when the char is a letter.
It is very convenient when there are not too many different actions (and you can remember the letters); of course, you may use SCPI for more complicated things.
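A minimal sketch of such an interpreter in C++ (the setX/setY/go actions below are hypothetical placeholders for real device commands):

#include <cctype>
#include <cstdio>
#include <string>

void execute(char action, int n) {
    switch (action) {
        case 'X': printf("set X = %d\n", n); break;  // hypothetical action
        case 'Y': printf("set Y = %d\n", n); break;  // hypothetical action
        case 'G': printf("go\n");            break;  // hypothetical action
    }
}

void interpret(const std::string& cmd) {
    int n = 0;
    for (char ch : cmd) {
        if (isdigit((unsigned char)ch)) {
            n = 10*n + (ch - '0');   // "push" the digit into the number
        } else {
            execute(ch, n);          // a letter: do the action...
            n = 0;                   // ...and reset the number
        }
    }
}

int main() { interpret("123X456YG"); }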
Is there a name for this (rather obvious) way to do it?
This is an example of postfix notation. It's a notation that fairly naturally falls out of anything that uses a stack.
Examples include:
Reverse Polish Notation -- common on programmable calculators from the late 1960s onwards.
... in which 4 3 - 5 + results in 6 being left at the head of the stack, because the stack builds up like this:
   Stack      Unconsumed input
1  [ ]        4 3 - 5 +
2  [ 4 ]      3 - 5 +
3  [ 4 3 ]    - 5 +
4  [ 1 ]      5 +
5  [ 1 5 ]    +
6  [ 6 ]
At steps 4 and 6, the operator is executed by popping its operands off the stack, then pushing the result onto the stack (see the sketch below).
Many stack-based programming languages -- including Forth and PostScript. Postfix syntax is also fairly common in the world of esoteric programming languages.
If you're curious, I thoroughly recommend having a go at hand-writing PostScript. It's a fully-fledged programming language; for example, you can write a recursive PostScript program in a few bytes to draw a fractal curve. Test it in Ghostscript. PDF is closely related to PostScript.
The language you have created differs from these in your approach to tokenisation, but at its core -- putting the operand(s) before the operator -- it's a postfix language.
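For illustration, here is a minimal C++ sketch of such a stack evaluator, hard-coded for the two operators used in the trace above:

#include <iostream>
#include <sstream>
#include <stack>
#include <string>

int eval_rpn(const std::string& expr) {
    std::istringstream in(expr);
    std::stack<int> s;
    std::string tok;
    while (in >> tok) {
        if (tok == "+" || tok == "-") {
            int b = s.top(); s.pop();            // pop the operands...
            int a = s.top(); s.pop();
            s.push(tok == "+" ? a + b : a - b);  // ...push the result
        } else {
            s.push(std::stoi(tok));              // an operand: just push it
        }
    }
    return s.top();
}

int main() { std::cout << eval_rpn("4 3 - 5 +") << "\n"; }  // prints 6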
You are unlikely to get a patent granted. :)
Related
In the infix notation a/b*(c+(d-e)), (d-e) will be evaluated first, but if we convert it to the postfix ab/cde-+*, then ab/ will be evaluated first.
Why is ab/ evaluated first in postfix instead of d-e?
Multiplication and division are left-associative, meaning they are evaluated from left to right. Since a and b are terminals (no further evaluation needs to be done), ab/ is ready to be evaluated. Once we get to the last term, c+(d-e), we need to delve deeper and only then do we evaluate de-.
When you talk about "precedence" (a concept designed to disambiguate infix notation, and hence not really applicable to postfix notation), you really seem to mean "order of operations", which is a broader notion.
One thing to realize is that the order of operations taught in elementary school (often with the mnemonic PEMDAS) isn't necessarily the order of operations that a computer will use when evaluating an expression like a/b*(c+(d-e)). Using PEMDAS, you would first calculate d-e, then c+(d-e), etc., which is a different order from that implicit in ab/cde-+*. But it is interesting to note that many programming languages will in fact evaluate a/b*(c+(d-e)) using the order of ab/cde-+* rather than a naive implementation of PEMDAS.

As an example, if in Python you import the module dis and evaluate dis.dis("a/b*(c+(d-e))") to disassemble a/b*(c+(d-e)) into Python byte code, you get:
 0 LOAD_NAME            0 (a)
 2 LOAD_NAME            1 (b)
 4 BINARY_TRUE_DIVIDE
 6 LOAD_NAME            2 (c)
 8 LOAD_NAME            3 (d)
10 LOAD_NAME            4 (e)
12 BINARY_SUBTRACT
14 BINARY_ADD
16 BINARY_MULTIPLY
18 RETURN_VALUE
which is easily seen to be exactly the same order of operations as ab/cde-+*. In fact, this postfix notation can be thought of as shorthand for the stack-based computation that Python uses when it evaluates a/b*(c+(d-e)).
Both evaluation orders execute the same operations with the same arguments and produce the same result, so in that sense the difference doesn't matter.
There are differences that matter, though. The grade-school arithmetic order is never used when infix expressions are evaluated in practice, because:
It requires more intermediate results to be stored. (a+b)*(c+d)*(e+f)*(g+h) requires 4 intermediate sums to be stored in grade-school order, but only 2 in the usual order (see the sketch after this list).
It's actually more complicated to implement the grade-school order in most cases; and
When sub-expressions have side-effects, the order becomes important and the usual order is easier for programmers to reason about.
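To make the storage claim concrete, here is a small C++ sketch that evaluates two postfix orderings of (a+b)*(c+d)*(e+f)*(g+h) (with every variable set to 1, and the grade-school multiplications grouped right-to-left so they remain pure stack operations) and reports the deepest the operand stack ever gets:

#include <cctype>
#include <cstddef>
#include <cstdio>
#include <stack>
#include <string>

size_t max_depth(const std::string& postfix) {
    std::stack<int> s;
    size_t deepest = 0;
    for (char ch : postfix) {
        if (isalpha((unsigned char)ch)) {
            s.push(1);                          // "load" a variable
        } else {
            int b = s.top(); s.pop();
            int a = s.top(); s.pop();
            s.push(ch == '+' ? a + b : a * b);  // apply the operator
        }
        if (s.size() > deepest) deepest = s.size();
    }
    return deepest;
}

int main() {
    printf("usual order:        %zu\n", max_depth("ab+cd+*ef+*gh+*"));  // 3
    printf("grade-school order: %zu\n", max_depth("ab+cd+ef+gh+***"));  // 5
}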
My problem concerns operations with complex numbers in Maple 18. The issue is the following:
I define this complex number:
c:=a+i*b;
then I compute the square: sort(evalc(c^2))
and the output is:
a^2+2*i*a*b-b^2;
So, how can I obtain an output like the following?
a^2-b^2+2*i*a*b;
In other words, I want an output where the real part precedes the imaginary part.
I have tried the sort command but it was not enough ... probably there exists some command to format the output as a complex number in this manner, but I can't find it ...
Thank you in advance :)
In Maple 18 you could use the new InertForm package to get more control over the formatting.
The exact look will depend on the interface. In the commandline (TTY) interface both uses of Display below look the same, but both display with extra round brackets around the real part. In the Standard Java GUI the first one has a grey + to denote the inert %+.
restart:
c:=a+b*I:
expr:=c^2:
U := `%+`(evalc(Re(expr)),evalc(I*Im(expr))):
with(InertForm):
Display(U);
(a^2 - b^2) + 2 I a b
Display(U, inert=false);
(a^2 - b^2) + 2 I a b
Value(U);
-b^2 + 2 I a b + a^2
This is a new way of handling such issues. In older Maple the real and complex parts could have been kept separate in the display by being each wrapped with the ``() operator. And then the actual value could be re-obtained by applying the expand command to strip that off. That's not so nice, displaying with extra brackets even in the GUI.
So I was playing with the newly standardized unordered_map from the STL. The code I have is kinda like this: I just create an unordered_map, fill it up, and print it out:
#include <iostream>
#include <string>
#include <unordered_map>
using namespace std;

int main()
{
    unordered_map<int,string> m1;
    m1[5]="lamb";
    m1[2]="had";
    m1[3]="a";
    m1[1]="mary";
    m1[4]="little";
    m1[7]="fleece";
    m1[6]="whose";
    m1[10]="fleecey";
    m1[8]="was";
    m1[9]="all";
    for(unordered_map<int,string>::const_iterator i = m1.begin(); i != m1.end(); ++i)
        cout<<i->first<<" "<<i->second<<endl;
}
However, the output I get is ordered thusly:
1 mary
2 had
3 a
4 little
5 lamb
6 whose
7 fleece
8 was
9 all
10 fleecey
But I don't want to pay the price to have my map ordered! That is why I am using an unordered_map... What is going on here?
Additional note: I am using gcc version 4.3.4 20090804 (release) 1 (GCC) and am compiling like this: g++ -std=c++0x maptest.cpp
"Unordered" doesn't mean it will store the items randomly or maintain the order you put them in the map. It just means you can't rely on any particular ordering. You don't pay a price for ordering, quite the contrary - the implementation isn't explicitly ordering the items, it's a hashmap and stores its elements in whatever way it pleases, which usually is a pretty performant way. It just so happens that the the hashing algorithm and other internal workings of the map, when using exactly these keys and this number and order of operations on the map, end up storing the items in a order that looks ordered. Strings, for example, may lead to an apparently randomized layout.
On a side note, this is probably caused by the map using a hash function that maps (at least some) integers to themselves, and using the lower bits of the hash (as many as the map size mandates) to determine the index into the underlying array (for instance, CPython does this - with some very clever additions to handle collisions relatively simply and efficiently; for the same reason the hashes of CPython strings and tuples are very predictable).
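You can watch this happen through the standard bucket interface. A small sketch, assuming (as libstdc++ and libc++ do, though the standard doesn't require it) that std::hash<int> is the identity:

#include <iostream>
#include <string>
#include <unordered_map>

int main() {
    std::unordered_map<int, std::string> m1;
    for (int k : {5, 2, 3, 1, 4, 7, 6, 10, 8, 9})
        m1[k] = "...";
    std::cout << "bucket_count: " << m1.bucket_count() << "\n";
    for (int k : {1, 2, 3, 10})
        std::cout << "key " << k
                  << " hash " << std::hash<int>{}(k)      // identity here
                  << " bucket " << m1.bucket(k) << "\n";  // typically hash % bucket_count
}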
For your amusement, here's the output from libc++, which also has an identity function for std::hash<int>.
9 all
8 was
10 fleecey
6 whose
7 fleece
4 little
1 mary
3 a
2 had
5 lamb
There are several ways to implement a hash container, each with its own tradeoffs.
I have some basic doubts that pop up every time I sit down to try my hand at interview questions.
Say A = 5, B = -2. Assuming that A and B are 4-bytes, how does the CPU do the A + B addition?
I understand that A will have its sign bit (MSB) as 0 to signify a positive value,
and B will have its sign bit as 1 to signify a negative integer.
Now, when I want to print A + B in a C++ program, does the addition module of the ALU (Arithmetic Logic Unit) first check the sign bits and then decide to do subtraction, and then follow the procedure for subtraction? How subtraction is done is my next question. Say:
A = 5
B = 2
I want to do A - B. Will the computer take the 2's complement of B, add it to A, and return the result (after discarding the extra bit on the left)?
A = 2
B = 5
How does the computer do A - B in this case?
I understand that any if-then kind of conditional logic, computing the 2's complement, discarding the extra bit, and so on, will all be done in hardware inside the ALU. What does this component of the ALU look like?
The whole reason we use 2's-complement is that addition is the same whether the numbers are positive or negative - there are no special cases to consider, like there are with 1's-complement or signed-magnitude representations.
So to find A-B, we can just negate B and add; that is, we find A + (-B), and because we're using 2's-complement, we don't worry whether (-B) is positive or negative, because the addition algorithm works the same either way.
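A quick C++ sketch to check this (using unsigned arithmetic, which is defined modulo 2^32, exactly like the wrap-around of a hardware adder):

#include <cassert>
#include <cstdint>

int main() {
    uint32_t a = 5, b = 2;
    assert(a - b == a + (~b + 1));  // 5 - 2 == 5 + (-2) == 3
    a = 2; b = 5;
    assert(a - b == a + (~b + 1));  // wraps to 2^32 - 3, i.e. -3 in two's complement
}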
Think in terms of two or three bits and then understand that these things scale up to 32 or 64 or however many bits.
First, let's start with decimal:
 99
+22
===
In order to do this we are going to have some "carry the one"s going on.
11
 99
+22
===
121
9 plus 2 is 1, carry the one; 1 plus 9 plus 2 is 2, carry the one...
The point, though, is to notice that to add two numbers I actually needed three rows; for at least some of it I might need to be able to add three numbers. Same thing with an adder in an ALU: each column or bit lane (a single-bit adder) needs to be able to add two inputs plus a carry-in bit, and the output is a one-bit result and a one-bit carry.
Since you used 5 and 2, let's do some 4-bit binary math:
 0101
+0010
=====
 0111
We didn't need a carry on this one, but you can see the math worked: 5 + 2 = 7.
And if we want to add 5 and -2:
11
 0101
+1110
=====
 0011
And the answer is 3, as expected; not really surprising, but we had a carry out. And since this was an add with a negative number in two's complement, it all worked: there was no "if sign bit, then..." - two's complement makes it so we don't care, just feed the adder the two operands.
Now for a subtle difference: what if you want to subtract 2 from 5? You select the subtract instruction, not add. Well, we all learned that negating in two's complement means invert and add one. And we saw above that a two-input adder really needs a third input for carry in, so that it can be cascaded to however wide the adder needs to be. So instead of doing two add operations (invert-and-add-1 being the first add, then the real add), all we have to do is invert and set the carry in:
Understand that there is no subtract logic; it adds the negative of whatever you feed it.
    v this bit is normally zero; for a subtract we set this carry-in bit
11 11
 0101   five
+1101   ones complement of 2
=====
 0011
And what do you know, we get the same answer. It doesn't matter what the actual values are for either of the operands: if it is an add operation, you put a zero on the carry-in bit and feed it to the adder. If it is a subtract operation, you invert the second operand, put a one on the carry in, and feed it to the same adder. Whatever falls out falls out. If your logic has enough bits to hold the result then it all works; if you do not have enough room then you overflow.
There are two kinds of overflow: unsigned and signed. Unsigned is simple: it is the carry bit. Signed overflow has to do with comparing the carry-in bit on the msbit column with the carry-out bit for that column. For our math above, you see that the carry in and carry out of that msbit column are the same; both are a one. And we happen to know by inspection that a 4-bit system has enough room to properly represent the numbers +5, -2, and +3. A 4-bit system can represent the numbers +7 down to -8. So if you were to add 5 and 5, or -6 and -3, you would get a signed overflow:
01 1
 0101
+0101
=====
 1010
Understand that the SAME addition logic is used for signed and unsigned math; it is up to your code, not the logic, to define whether those bits are considered two's complement signed or unsigned.
With the 5 + 5 case above, you see that the carry in on the msbit column is a 1, but the carry out is a 0. That means the V flag, the signed overflow flag, will be set by the logic. At the same time the carry out of that column, which is the C flag (the carry flag), will not be set. When thinking unsigned, 4 bits can hold the numbers 0 to 15, so 5 + 5 = 10 does not overflow. But when thinking signed, 4 bits can hold +7 down to -8, and 5 + 5 = 10 is a signed overflow, so the V flag is set.
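Here is a C++ sketch of a 4-bit ripple-carry adder that models all of the above: the same columns of single-bit adders do both add and subtract (invert the second operand and set the carry in), and the C and V flags fall out of the carries around the msbit column:

#include <cstdio>

struct Result { unsigned sum; bool C; bool V; };

Result addsub4(unsigned a, unsigned b, bool subtract) {
    unsigned carry = subtract ? 1u : 0u;  // carry-in: set it for a subtract
    unsigned sum = 0, c_in_msb = 0;
    for (int i = 0; i < 4; ++i) {         // one single-bit adder per column
        unsigned ai = (a >> i) & 1u;
        unsigned bi = ((subtract ? ~b : b) >> i) & 1u;  // invert b to subtract
        if (i == 3) c_in_msb = carry;     // remember the carry into the msbit
        sum |= (ai ^ bi ^ carry) << i;                  // column result bit
        carry = (ai & bi) | (carry & (ai | bi));        // column carry out
    }
    return { sum, carry != 0,             // C: carry out of the msbit column
             c_in_msb != carry };         // V: carry in != carry out at msbit
}

int main() {
    Result r = addsub4(5, 2, true);       // 5 - 2
    printf("5 - 2 = %u  C=%d V=%d\n", r.sum, r.C, r.V);  // 3, C=1, V=0
    r = addsub4(5, 5, false);             // 5 + 5
    printf("5 + 5 = %u  C=%d V=%d\n", r.sum, r.C, r.V);  // 10, C=0, V=1
}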
If/when you have an add-with-carry instruction, it takes the SAME adder circuit and, instead of feeding the carry in a zero, feeds it the carry flag. Likewise for subtract-with-borrow: instead of feeding the carry in a 1, the carry in is either a 1 or a 0 based on the state of the carry flag in the status register.
Multiplication is a whole other story; binary makes multiplication much easier than decimal math does, but you DO have to have different unsigned and signed multiplication instructions. And division is its own separate beast, which is why most instruction sets do not have a divide. Many do not have a multiply because of the number of gates or clocks it burns.
You are a bit wrong in the sign bit part. It's not just a sign bit - every negative number is converted to 2's complement. If you write:
B = -2
The compiler, when compiling it to binary, will make it:
1111 1111 1111 1111 1111 1111 1111 1110
Now when it wants to add 5, the ALU gets 2 numbers and adds them, a simple addition.
When the ALU gets a command to subtract, it is given 2 numbers: it applies NOT to every bit of the second number, does a simple addition, and adds 1 more (because the 2's complement is NOT of every bit, plus 1).
The basic thing here to remember is that 2's complement was selected for exactly the purpose of not having to make 2 separate procedures for 2+3 and for 2+(-3).
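You can see that bit pattern from C++ by reinterpreting the same bits as unsigned (a sketch):

#include <cstdint>
#include <cstdio>

int main() {
    int32_t b = -2;
    printf("%08X\n", (unsigned)(uint32_t)b);  // prints FFFFFFFE
}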
does the addition module of ALU (Arithmetic Logic Unit) first check for sign bit and then decide to do subtraction and then follow the procedure of subtraction
No, in one's and two's complement there's no differentiation between adding/subtracting a positive or negative number. The ALU works the same for any combination of positive and negative values.
So the ALU is basically doing A + (-B) for A - B, but it doesn't need a separate negation step. Designers use a clever trick to make adders do both add and sub in the same cycle length by adding only a multiplexer and a NOT gate, along with the new input Binvert, in order to conditionally invert the second input. Here's a simple ALU example which can do AND/OR/ADD/SUB:
Computer Architecture - Full Adder
The real adder is just a box with a plus sign inside (⊞) which adds a with b or ~b plus the carry in, producing the sum and carry out. It works by realizing that in two's complement -b = ~b + 1, so a - b = a + ~b + 1. That means we just need to set the carry in to 1 (or negate the carry in for borrow in) and invert the second input (i.e. b). This type of ALU can be found in various computer architecture books like:
Digital Design and Computer Architecture
Computer Organization and Design MIPS Edition: The Hardware/Software Interface
Computer Organization and Design RISC-V Edition: The Hardware Software Interface
In one's complement -b = ~b, so you don't set the carry in when you want to subtract; otherwise the design is the same. However, two's complement has another advantage: operations on signed and unsigned values also work the same, so you don't even need to distinguish between signed and unsigned types. For one's complement you'll need to add the carry bit back to the least significant bit if the type is signed.
With some simple modifications to the above ALU, it can do 6 different operations: ADD, SUB, SLT, AND, OR, NOR.
CSE 675.02: Introduction to Computer Architecture
Multiple-bit operations are done by concatenating multiple of the single-bit ALUs above. In reality ALUs are able to do a lot more operations, but they're built on similar principles to save space.
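A C++ sketch of one such bit cell, with Binvert and a result multiplexer (the op encoding 0=AND, 1=OR, 2=ADD is made up for the example; SUB is ADD with binvert=1 and carry-in=1 on the least significant cell):

#include <cstdio>

struct BitOut { unsigned result; unsigned carry_out; };

BitOut alu1(unsigned a, unsigned b, unsigned carry_in,
            bool binvert, unsigned op) {
    if (binvert) b ^= 1u;                            // NOT gate + mux on b
    unsigned sum  = a ^ b ^ carry_in;                // full-adder sum
    unsigned cout = (a & b) | (carry_in & (a | b));  // full-adder carry
    unsigned mux[3] = { a & b, a | b, sum };         // result multiplexer
    return { mux[op], cout };
}

int main() {
    // Least significant cell of 5 - 2: a0=1, b0=0, Binvert on, carry-in 1.
    BitOut r = alu1(1, 0, 1, true, 2);
    printf("result=%u carry_out=%u\n", r.result, r.carry_out);  // 1, 1
}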
In 2's-complement notation: not B = -B - 1, or -B = (not B) + 1. This can be checked on a computer or on paper.
So A - B = A + (not B) + 1 which can be performed with:
1 bitwise not
1 increment
1 addition
There's a trick to inefficiently increment and decrement using just nots and negations.
For example if you start with the number 0 in a register and perform:
not, neg, not, neg, not, neg, ... the register will have values:
-1, 1, -2, 2, -3, 3, ...
Or as another 2 formulas:
not(-A) = A - 1
-(not A) = A + 1
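A quick C++ sketch of that alternation, starting from 0:

#include <cstdio>

int main() {
    int r = 0;
    for (int i = 0; i < 3; ++i) {
        r = ~r; printf("%d ", r);  // not(-A) = A - 1
        r = -r; printf("%d ", r);  // -(not A) = A + 1
    }
    printf("\n");                  // prints: -1 1 -2 2 -3 3
}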
History: I read in one of Knuth's algorithm books that the first computers used base 10. Then, they switched to two's complement, as described here.
Question: Why couldn't the base be -2?
Examples:
(-2)^1 = -2
(-2)^3 = -8
The problem is that a negabinary (base -2) system is more difficult to understand, and the numbers of possible positive and negative values are different. To see this latter point, consider a simple 3-bit case.
Here
the first (rightmost) bit represents the decimal 1;
the middle bit represents the decimal -2; and
the third (leftmost) bit represents the decimal 4
So
000 -> 0
001 -> 1
010 -> -2
011 -> -1
100 -> 4
101 -> 5
110 -> 2
111 -> 3
Thus the range of expressible values is -2 to 5, i.e. non-symmetric.
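A small C++ sketch that converts integers to their base -2 digit strings makes the asymmetry easy to see:

#include <iostream>
#include <string>

std::string to_negabinary(int n) {
    if (n == 0) return "0";
    std::string digits;
    while (n != 0) {
        int r = n % -2;              // C++ remainder takes the dividend's sign
        n /= -2;
        if (r < 0) { r += 2; ++n; }  // force the digit into {0, 1}
        digits.insert(digits.begin(), char('0' + r));
    }
    return digits;
}

int main() {
    for (int n = -2; n <= 5; ++n)    // the 3-bit range from the table above
        std::cout << n << " -> " << to_negabinary(n) << "\n";
}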
At its heart, digital logic is base two. A digital signal is either on or off. Supporting other bases (as in BCD) means wasted representation space, more engineering, more complex specification, etc.
Edited to add: In addition to the trivial representation of a single binary digit in digital logic, addition is easily realized in hardware, starting with the half adder, which is easily realized in Boolean logic (i.e. with transistors):
     (No carry)    (with carry)
   |  0    1        0    1
---+----------------------------
 0 |  00   01       01   10
 1 |  01   10       10   11
(The returned digit is (A xor B) xor C, and the carry is ((A and B) or (C and (A or B))).) These are then chained together to generate a full register-width adder.
Which brings us to two's complement: negation is easy, and the addition of mixed positive and negative numbers follows naturally with no additional hardware. So subtraction comes almost for free.
Few other representations will allow arithmetic to be implemented so cheaply, and I know of none that are easier.
Optimization in storage and optimization in processing time are often at cross purposes with each other; all other things being equal, simplicity usually trumps complexity.
Anyone can propose any storage mechanism for information they wish, but unless there are processors or algorithms that support it, it won't get used.
There are two reasons to choose base 2 over base -2:
First, in a lot of applications you don't need to represent negative numbers. By isolating their representation to a single bit you can either expand the range of representable numbers, or reduce the storage space required when negative numbers aren't needed. In base -2 you need to include the negative values even if you clip the range.
Second, 2's complement hardware is simple to implement. Not only is it simple to implement, it is super simple to implement 2's complement hardware that supports both signed and unsigned arithmetic, since they are the same thing. In other words, the binary representations of uint4(8) and sint4(-8) are the same, and the binary representations of uint4(7) and sint4(7) are the same, which means you can do the addition without knowing whether or not it is signed; the values all work out either way. That means the HW can totally avoid knowing anything about signs and leave that as a language convention.
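A C++ sketch of that point (using 8-bit types, since C++ has no 4-bit integer): the same addition serves both interpretations, only the reading of the bits differs:

#include <cstdint>
#include <cstdio>

int main() {
    uint8_t ua = 0xF8, ub = 0x05;  // unsigned: 248 + 5 = 253 = 0xFD
    int8_t  sa = -8,   sb = 5;     // signed:    -8 + 5 =  -3 = 0xFD
    printf("%02X %02X\n", (unsigned)(uint8_t)(ua + ub),
                          (unsigned)(uint8_t)(sa + sb));  // FD FD
}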
Also, the use of the binary system has a mathematical background: consider Claude Shannon's information theory. My English skills don't qualify me to explain this topic, so better to follow the link to Wikipedia and enjoy the maths behind all this stuff.
In the end the decision was made because of voltage variance.
With base 2 it is on or off, no in between.
However, with base 10, how do you know what each number is?
Is 0.1 volts a 1? What about 0.11? Voltage can vary and is not precise, which is why an analog signal is not as good as a digital one. This is why paying more than $6 for an HDMI cable is a waste: it is digital, so the signal either gets there or it doesn't. For analog audio it does matter, because the signal can change.
Here is an example of the complexity that dmckee pointed out (without examples), so you can see one - the numbers 0-9 in base -2:
0 = 0
1 = 1
2 = 110
3 = 111
4 = 100
5 = 101
6 = 11010
7 = 11011
8 = 11000
9 = 11001
1's complement does have 0 and -0 - is that what you're after?
CDC used to produce 1's complement machines which made negation very easy as you suggest. As I understand it, it also allowed them to produce hardware for subtraction that didn't infringe on IBM's patent on the 2's complement binary subtractor.