How to build a finite state machine that shows modulus 4 in binary - fsm

Can someone show me how to build a finite state machine that shows modulus 4 in binary?

Well, a binary number mod 4 is going to be 0 if the last two bits are 00, so that's where you'll want to start. Just think what adding another 1 or 0 to that will do to the last two digits, and do that for each possible state.

I'll leave you with this (big) hint: think about how many possible results you can have in modulus-4. Once you know that, you'll know how many states your machine can have.
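For concreteness, here is a rough sketch (mine, not part of the answer) of such a machine in C++, reading the bits most-significant-first. The state is simply the value of the bits read so far, modulo 4:

#include <iostream>
#include <string>

// Appending a bit b to a number n gives 2*n + b, so the transition
// function of the machine is state -> (2*state + b) % 4.
int mod4(const std::string& bits) {
    int state = 0;                       // start state: the empty string is 0 mod 4
    for (char c : bits) {
        int b = c - '0';
        state = (2 * state + b) % 4;     // one transition per input bit
    }
    return state;                        // only 4 reachable states: 0, 1, 2, 3
}

int main() {
    std::cout << mod4("1101") << "\n";   // 13 mod 4 = 1
    std::cout << mod4("1100") << "\n";   // 12 mod 4 = 0 (last two bits are 00)
}

Each of the four states corresponds to one possible remainder, which is exactly the hint above, and the final state depends only on the last two bits read.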

Related

Arduino float - 6 decimal numbers

I wonder if anyone has a good solution for decimal numbers.
I use an Arduino Mega and am trying to print a float with 6 digits after the decimal point. When I try, I get 5 digits correct, but not the 6th. The 6th digit is either not shown at all, or shown as 0. I have tried a lot of different things, but it always ends up showing 5 digits correctly, but not 6.
Does anyone have a solution for this?
Appreciate all help
In general, you can use scaled integer forms of floating-point numbers to preserve accuracy.
Specifically, if these are lat/lon values from a GPS device, you might be interested in my NeoGPS. Internally, it uses 32-bit integers to maintain 10 significant digits. As you have discovered, most libraries only provide 6 or 7 digits because they use float.
The example NMEAloc.ino shows how to print the 32-bit integers as if they were floating-point values. It just prints the decimal point at the right place.
The NeoGPS distance and bearing calculations are also careful to perform math operations in a way that maintains that accuracy. The results are very good at small distances/bearings, unlike all other libraries that use the float type in naive calculations.
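A minimal Arduino sketch of that idea (my own code, not taken from NeoGPS; the function name printScaled is made up for illustration): keep the value as a scaled long and only insert the decimal point when printing.

// Store the value as a scaled 32-bit integer (e.g. micro-degrees) and print
// the whole and fractional parts separately, padding the fraction with zeros.
void printScaled(long value, unsigned char decimals) {
  long scale = 1;
  for (unsigned char i = 0; i < decimals; i++) scale *= 10;
  long whole = value / scale;
  long frac  = value % scale;
  if (value < 0 && whole == 0) Serial.print('-');   // handle e.g. -0.000123
  Serial.print(whole);
  Serial.print('.');
  if (frac < 0) frac = -frac;
  for (long p = scale / 10; p > frac && p > 1; p /= 10) Serial.print('0');  // leading zeros
  Serial.println(frac);
}

// printScaled(51123456L, 6);  // prints 51.123456, all six decimals intact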
4 byte floats can hold 6 significant digits.
8 byte doubles can hold 15.
You need to use doubles to get the precision you want.
info on 4 byte floats

Why adding two big integers will get a negative one?

I have tried these in my own computer.
eg. 2000000000+2000000000=-12315555331
Take a 32-bit signed integer (a long on many platforms) as an example. The first of the 32 bits is the sign bit, which means the number is negative if it is 1 and positive if it is 0. 2000000000 + 2000000000 = 4000000000, which is larger than the 32-bit signed maximum of 2147483647, so the sum overflows, the sign bit ends up set, and the result comes out negative.
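For illustration, a small C++ sketch (my own, assuming 32-bit two's-complement integers; the exact negative value you see depends on the platform and integer width):

#include <cstdint>
#include <iostream>

int main() {
    std::int32_t a = 2000000000;   // fits: the 32-bit signed maximum is 2147483647
    std::int32_t b = 2000000000;

    // Signed overflow is undefined behaviour in C/C++, so compute the wrap-around
    // explicitly via unsigned arithmetic, which is defined to wrap modulo 2^32.
    std::uint32_t wrapped = static_cast<std::uint32_t>(a) + static_cast<std::uint32_t>(b);
    std::cout << static_cast<std::int32_t>(wrapped) << "\n";   // -294967296 on typical platforms: the sign bit is set

    // A wider type avoids the overflow entirely.
    std::int64_t wide = static_cast<std::int64_t>(a) + b;
    std::cout << wide << "\n";                                 // 4000000000
    return 0;
}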

Maximizing Stored Information (Entropy?)

So I'm not sure if this question belongs here or maybe Math overflow. In any case, my question is about information theory.
Let's say I have a 16 bit word. There are 65,536 unique configurations of 1's and 0's in that number. What each one of those configurations represents is unimportant as depending on your notation (2's complement vs signed magnitude etc.) the same configuration can mean different things.
What I'm wondering is are there any techniques to store more information than that in a 16 bit word?
My original ideas were like odd/even parity or something but then I realized that's already determined by the configuration... i.e. there is no extra information encoded in that. I'm beginning to wonder if no such thing exists.
EDIT For example, let's say some magical computer (thinking quantum or something here) could understand 0,1,a. Then obviously we have 3^16 configurations and can now store more than the numbers [0 - 65,535]. Are there any other properties of a 16 bit word that you can mess with in order to encode extra information in your bit stream?
EDIT2 I am really struggling to put this into words. Right now, when I look at a 16 bit word in the computer, the property which conveys information to me is the relative ordering of individual 1's and 0's. Is there another property or way of looking at a 16 bit word which would allow more than 2^16 unique "configurations"? (Note it would no longer be a configuration, but 2^16 xxxx's where xxxx is a noun describing an instance of that property.) The only thing I can really think of is looking at the number of 1-to-0 transitions rather than whether each bit is actually a 1 or a 0. But transitions do not yield more than 2^16 combinations, because they are ultimately solely dependent on the configuration of 1's and 0's. I'm looking for properties that would derive from the configuration of 1's and 0's AND something else, thus resulting in MORE than 2^16. Does anyone even know what this would be called if it did exist?
EDIT3 Ok I got it. My question boils down to this: How do we prove that the configuration of 1's and 0's in a word completely defines it? I.E. How do we prove that you need no other information besides the bitmap to show equality between two 16 bit words?
FINAL EDIT
I have an example... If instead of looking at the presence of 1's and 0's we look at the transitions between bits, we can store 2^16 alphabet characters. If the bit to the left is the same, treat it as a 1; if it transitions, treat it as a 0. Using the 16 bit word as a circularly-linked-list type structure where each link represents a 0/1, we basically form a 16 bit word out of the transitions between bits. That is an exact example of what I was looking for, but it results in 2^16, nothing better. I am convinced that you cannot do better and am marking the correct answer =(
The amount of information in a particular configuration of 16 0/1s is determined by the probability of this configuration (this is called self-information). This can be bigger than 16 bits if the configuration is less likely than 1/(2^16), but that means that some other configurations are more likely than 1/(2^16) and so will contain less information than 16 bits.
To take into account all the possible configurations, you have to use the expected value of self-information (called entropy) of individual configurations. This value will reach its maximum when the probabilities of all configurations are equal (that is 1/(2^16)) and then it will be exactly 16 bits.
So the answer is no, you cannot store more than 16 bits of information in 16 0/1s.
See
http://en.wikipedia.org/wiki/Information_theory
http://en.wikipedia.org/wiki/Self-information
EDIT It is important to realize that a bit does not stand for 0 or 1; it is a unit of information, namely -log_2 P(w), where P(w) is the probability of a particular configuration.
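As a quick numeric check of those formulas (my own sketch, assuming a uniform distribution over all configurations):

#include <cmath>
#include <cstdio>

int main() {
    // Uniform distribution: every one of the 2^16 configurations has probability 1/65536.
    double p = 1.0 / 65536.0;

    double self_information = -std::log2(p);           // -log2 P(w) = 16 bits
    double entropy = 65536.0 * (p * -std::log2(p));    // expected self-information over all configurations = 16 bits

    std::printf("%.1f bits, %.1f bits\n", self_information, entropy);   // 16.0 bits, 16.0 bits
    return 0;
}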
You cannot store more than 2 states in one digit of a semiconductor device. You answered it yourself: the only way more information could be fitted into 16 digits is if each digit had more possible values.

Actionscript 3 Math inconsistencies

I'm trying to build a calculator in Flex / ActionScript 3, but I get some weird results when doing math:
trace(1.4 - .4); //should be 1 but it is 0.9999999999999999
trace(1.5 - .5); //should be 1 and it is 1
trace(1.444 - .444); //should be 1 and it is 1
trace(1.555 - .555); //should be 1 but it is 0.9999999999999999
I know there are some issues with floating point numbers, but if that's the cause, shouldn't it fail for all of my examples? Am I right?
How is this problem solved in other calculators, and how should I proceed in order to build a usable calculator in ActionScript 3?
Thank you in advance,
Adnan
Welcome to IEEE 754 floating point. Enjoy the inaccuracies. Use a fixed-point mechanism if you want to avoid them.
Your results are to be expected, and will be observed in any programming language with a floating point datatype. Computers cannot represent most decimal fractions exactly in binary floating point, which causes edge cases like the ones you posted.
Read up on floating point accuracy problems at Wikipedia.
I would assume that most calculators display fewer decimal places than the precision of their floating point. Rounding to fewer decimal places than your level of precision should alleviate this sort of problem, but it won't solve all of the issues.
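The same behaviour, and the usual display-rounding workaround, sketched in C++ (ActionScript's Number is an IEEE 754 double just like C++'s double, but this is only an illustration, not AS3 code):

#include <cmath>
#include <iomanip>
#include <iostream>

int main() {
    double raw = 1.4 - 0.4;                               // same IEEE 754 issue as in the question
    std::cout << std::setprecision(17) << raw << "\n";    // 0.99999999999999989, not exactly 1

    // Round to fewer decimal places than the double actually carries before displaying.
    double shown = std::round(raw * 1e10) / 1e10;
    std::cout << std::setprecision(12) << shown << "\n";  // 1
    return 0;
}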

Why Base 2 with binary numbers?

History: I read in one of Knuth's algorithm books that the first computers used base 10. Later, they switched to two's complement (see here).
Question: Why couldn't the base be -2, at least in a monoid?
Examples:
(-2)^1 = -2
(-2)^3 = -8
The problem is that with a negabinary (base -2) system, it's more difficult to understand, and the number of possible positive values differs from the number of possible negative values. To see this latter point, consider a simple 3 bit case.
Here
the first (rightmost) bit represents the decimal 1;
the middle bit represents the decimal -2; and
the third (leftmost) bit represents the decimal 4
So
000 -> 0
001 -> 1
010 -> -2
011 -> -1
100 -> 4
101 -> 5
110 -> 2
111 -> 3
Thus the range of expressible values is -2 to 5, i.e. non-symmetric.
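If it helps, here is a rough sketch (mine, not from the answer) of how a decimal number is converted to base -2, which also reproduces the table above:

#include <iostream>
#include <string>

// Convert n to its base -2 (negabinary) digit string.
std::string to_negabinary(int n) {
    if (n == 0) return "0";
    std::string digits;
    while (n != 0) {
        int r = n % -2;          // in C++ the remainder takes the sign of the dividend
        n /= -2;
        if (r < 0) {             // force the digit into {0, 1}
            r += 2;
            n += 1;
        }
        digits.insert(digits.begin(), char('0' + r));
    }
    return digits;
}

int main() {
    for (int i = -2; i <= 5; ++i)
        std::cout << i << " -> " << to_negabinary(i) << "\n";   // matches the 3-bit table above
}

(The table above pads every value to three bits; the sketch simply drops leading zeros.)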
At its heart, digital logic is base two. A digital signal is either on or off. Supporting other bases (as in BCD) means wasted representation space, more engineering, more complex specification, etc.
Edited to add: In addition to the trivial representation of a single binary digit in digital logic, addition is easily realized in hardware, starting with the half adder and full adder, which are easily realized in Boolean logic (i.e. with transistors):
             (no carry)        (with carry)
     A:   |   0      1     |    0      1
   -------+----------------+-----------------
   B: 0   |   00     01    |    01     10
      1   |   01     10    |    10     11

(each entry is carry, sum)
(the returned digit is (A xor B) xor C, and the carry is ((A and B) or (C and (A or B))) ) which are then chained together to generate a full register adder.
Which brings us to twos complement: negation is easy, and the addition of mixed positive and negative numbers follows naturally with no additional hardware. So subtraction comes almost for free.
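A short sketch (my own C++, not part of the answer) that mimics the chained full adders and shows subtraction falling out of two's complement:

#include <cstdint>
#include <iostream>

// Adds two 16-bit values one bit at a time, just as the chained full adders would:
// sum bit = (a xor b) xor carry, new carry = (a and b) or (carry and (a or b)).
std::uint16_t ripple_add(std::uint16_t a, std::uint16_t b) {
    std::uint16_t result = 0;
    unsigned carry = 0;
    for (int i = 0; i < 16; ++i) {
        unsigned ai = (a >> i) & 1u;
        unsigned bi = (b >> i) & 1u;
        unsigned sum = (ai ^ bi) ^ carry;
        carry = (ai & bi) | (carry & (ai | bi));
        result |= static_cast<std::uint16_t>(sum << i);
    }
    return result;   // the final carry out is dropped, i.e. arithmetic modulo 2^16
}

int main() {
    std::cout << ripple_add(1234, 4321) << "\n";   // 5555

    // Two's complement: a - b is just a + (~b + 1), reusing the same adder.
    std::uint16_t neg3 = ripple_add(static_cast<std::uint16_t>(~3u), 1);   // bit pattern of -3
    std::cout << static_cast<std::int16_t>(ripple_add(5, neg3)) << "\n";   // 2
    return 0;
}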
Few other representations will allow arithmetic to be implemented so cheaply, and I know of none that are easier.
Optimization in storage and optimization in processing time are often at cross purposes with each other; all other things being equal, simplicity usually trumps complexity.
Anyone can propose any storage mechanism for information they wish, but unless there are processors or algorithms that support it, it won't get used.
There are two reasons to choose base 2 over base -2:
First, in a lot of applications you don't need to represent negative numbers. By isolating their representation to a single bit you can either expand the range of representable numbers, or reduce the storage space required when negative numbers aren't needed. In base -2 you need to include the negative values even if you clip the range.
Second, 2s complement hardware is simple to implement. Not only is it simple to implement, it is super simple to implement 2s complement hardware that supports both signed and unsigned arithmetic, since they are the same thing. In other words, the binary representation of uint4(8) and sint4(-8) is the same, and the binary representation of uint4(7) and sint4(7) is the same, which means you can do the addition without knowing whether or not it is signed; the values all work out either way. That means the HW can totally avoid knowing anything about signs and let that be dealt with as a language convention.
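A tiny check of that point (my own sketch; C++ has no 4-bit integer type, so this uses 8-bit values, where the bit pattern 0xF8 is 248 unsigned and -8 signed):

#include <cstdint>
#include <cstdio>

int main() {
    std::uint8_t u = 248;   // bit pattern 1111 1000
    std::int8_t  s = -8;    // the same bit pattern, read as two's complement

    std::printf("%02X %02X\n", static_cast<unsigned>(u),
                static_cast<unsigned>(static_cast<std::uint8_t>(s)));      // F8 F8

    // Adding 5 produces the same bits either way; only the interpretation differs.
    std::uint8_t usum = static_cast<std::uint8_t>(u + 5);   // 253
    std::int8_t  ssum = static_cast<std::int8_t>(s + 5);    // -3
    std::printf("%02X %02X\n", static_cast<unsigned>(usum),
                static_cast<unsigned>(static_cast<std::uint8_t>(ssum)));   // FD FD
    return 0;
}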
Also, the use of the binary system has a mathematical background. Consider the information theory of Claude Shannon. My English isn't good enough to explain this topic properly, so better to follow the link to Wikipedia and enjoy the maths behind all this stuff.
In the end the decision was made because of voltage variance.
With base 2 it is on or off, with nothing in between.
But with base 10, how do you know what each number is?
Is 0.1 volts a 1? What about 0.11? Voltage can vary and is not precise, which is why an analog signal is not as good as a digital one. (It is also why paying more than $6 for an HDMI cable is a waste: the signal is digital, so it either gets there or it doesn't. With analog audio it does matter, because the signal can degrade.)
Please see an example of the complexity that dmckee pointed out without giving examples; here are the numbers 0-9 in base -2:
0 = 0
1 = 1
2 = 110
3 = 111
4 = 100
5 = 101
6 = 11010
7 = 11011
8 = 11000
9 = 11001
1's complement does have 0 and -0 - is that what you're after?
CDC used to produce 1's complement machines which made negation very easy as you suggest. As I understand it, it also allowed them to produce hardware for subtraction that didn't infringe on IBM's patent on the 2's complement binary subtractor.
