Designing a bitwise AND, 4 bits, 2 inputs a and b - digital-design

What is the logic for implementing a truth table that does a bitwise AND of two inputs, each 4 bits wide, and how many output functions will there be? I just need one example, please.

You can use an online Python coding playground to find the answers to your truth table.
Here is a link for one: Python Online
Inside this you can play around with your inputs and then 'execute it' to see the output.
Here are two examples for you to type in. After you type this in, click 'execute' and wait for the results to appear on the right.
inputA = 0b0000
inputB = 0b0000
print(bin(inputA & inputB))
>>> 0b0
inputA2 = 0b1110
inputB2 = 0b1111
print(bin(inputA2 & inputB2))
>>> 0b1110
You can try to think of a more efficient way to find these values but for now you have a brute-force way to get all of the truth table values by manually changing them.
You can find other bitwise operations for Python here: Bitwise Operations
To answer the second part: if you have two 4-bit inputs, you can have up to 256 input combinations (2^4 * 2^4), so the full truth table has 256 rows.
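Rather than editing the values by hand, you can loop over every combination. A minimal Python sketch of the brute-force approach mentioned above (plain Python 3, nothing special assumed):
# Brute-force the full truth table for a 4-bit bitwise AND (256 rows).
for a in range(16):          # all 4-bit values for input A
    for b in range(16):      # all 4-bit values for input B
        result = a & b       # bitwise AND, the result still fits in 4 bits
        print(f"{a:04b} AND {b:04b} = {result:04b}")
Note that each output bit depends only on the corresponding pair of input bits, so the 4-bit AND is really four independent single-bit output functions, one per bit position.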

Related

Asymmetric difference between two binary numbers (bitsets)

It is quick and easy to determine shared/different bits between two binary numbers with AND or XOR. Let's say we have A: 10011 and B: 11001; we can get the difference.
10011 XOR 11001 = 01010 (1s are different, 0s are the same.)
Are there any quick and easy logic or arithmetic operations that could produce a similar but asymmetric output (1s showing bits that are, for example, present in A but missing in B, or vice versa)?
Example: 10011 ??? 11001 = 00010 (1s mean present in the left-hand operand, missing in the right)
Could it be done with some quick arithmetic/logic, or would I have to start a loop to go through the comparisons one by one?
I got to this question while contemplating storing some presence/absence data in bytes as bit flags (for memory efficiency). I was already gleeful about the fact that, were I to do so, I could then do quick and easy data diffing operations, but for many applications the direction of the difference is also important.
The more canonical way to express this is A AND (NOT B), where NOT flips all bits.
tkausl's comment on the question answers it successfully:
(A XOR B) AND A would really do the trick.
The XOR generates the difference between A and B, and the AND mask with A then keeps only the bits that are present in A. The resulting difference shows the bits that are set in A but not in B.
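Here is a quick Python check of both forms, using the values from the example above:
A = 0b10011
B = 0b11001

# Bits set in A but not in B: mask NOT B with A.
print(f"{A & ~B:05b}")        # 00010
# The equivalent form from the comment: (A XOR B) AND A.
print(f"{(A ^ B) & A:05b}")   # 00010
# Swap the roles to get bits set in B but not in A.
print(f"{B & ~A:05b}")        # 01000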

Arduino - 5 questions for real Wire.write() and Wire.read() explanation

I Googled this a lot, and it seems that I am not the only one having problems with really understanding Wire.write() and Wire.read(). Being a novice, I almost never use libraries already written by somebody else; I try to create my own class for a module in order to truly understand how the module works and to learn how to manipulate it. I've read a few books and too many tutorials, but I can summarise them in two points:
a) all tutorials just show the very basics of how to use these methods, and b) they don't actually explain the steps, as if everything were totally self-explanatory. Call me stupid, but I have the feeling that somebody told me that 1 + 1 = 2 and then gave me a polynomial equation to solve :(
All book examples and almost all tutorials look like this imaginary example:
Wire.beginTransmission(Module_Address); //Use this to start transmission
Wire.write(0); // go to first register
Wire.endTransmission(); // end this
//To read
Wire.requestFrom(Module_Address, 3); //Read three registers
Wire.read(); //Read first register
Wire.read(); //Read second register
Wire.read(); //Read third register
And that's it about reading.
When it comes to writing, it's even worse:
Wire.beginTransmission(Module_Address); //Use this to start transmission
Wire.write(0); // go to first register
Wire.write(something); //Write to first register
Wire.write(something); //write to second register
Wire.endTransmission(); // end this
So far, working with ANY module I've got, it was NEVER that easy. Usually, every register has more than one "option" inside. For example, let's say that an imaginary module has its first read register laid out like this:
ADDRESS | BIT 7 | BIT 6 | BIT 5 | BIT 4 | BIT 3 | BIT 2 | BIT 1 | BIT 0
data Byte1 | mute  | option2 (bits 6-5)    | option3 (bits 4-0)
To read only option 3, I would use this code:
Wire.beginTransmission(module_address);
Wire.write(0);
Wire.endTransmission();
Wire.requestFrom(module_address, 1);
byte readings = Wire.read() & 0x1F; //0x1F is hexadecimal of binary 0001111 for option 3 in register 1
QUESTION 1
What does this '&' after Wire.read() REALLY mean? (I know that it selects an option within the register, but I do not really understand why it is there.)
QUESTION 2
Why isn't the previous problem written about anywhere? So many tutorials, so many books, but I "discovered" it by accident when I tried to figure out how one library worked.
QUESTION 3
Imagine that the hypothetical module has a third register that, in write mode, looks like this:
ADDRESS | BIT 7 | BIT 6 | BIT 5 | BIT 4 | BIT 3 | BIT 2 | BIT 1 | BIT 0
data Byte3 | write flag (bits 7-6) | option2 | option3
How do I write the flag without affecting option 2 and option 3? In other words, how do I write only to register 3's write flag? If I write 11000000 it could affect the other options, because maybe I do not know exactly what options 2 and 3 do, or I do not wish to interfere with the default setup.
QUESTION 4
Certain modules have to be written in binary-coded decimal. Let's say that you have a timer and you wish to set 17 seconds for a countdown to 0. To do that, you need to write the number 17 to register one, but the number should be binary-coded decimal. 17 in binary-coded decimal is: 0001 0111. But when you do this:
Wire.beginTransmission(module_address);
Wire.write(0);
Wire.write(00010111);
Wire.endTransmission();
You get a different number, 13 or 10 (I can't recall which; I just know it was wrong).
However, when doing the conversion 17/10*16 + 17%10, it writes the correct number, 17.
Yes, I also found this out by accident. BUT where does this equation come from? I searched (obviously in the wrong places) as much as I could, but there was nothing about it. So, how did somebody come up with this equation?
QUESTION 5
Probably a dumb off-topic question, BUT:
should an Arduino library be written in a way that makes it difficult for others to figure out the idea behind it? In other words, to figure out what exactly the developer was doing? I remember that one person used a lot of messy code to read something from a sensor and then a formula to convert it from binary-coded decimal in order to print it to the Serial Monitor, while the same thing could be done with simply
Serial.print(read_byte, HEX);
It's not that I am smarter (or better) than they are, I just don't understand why somebody would write complex code when there is really no need for it.
Thanks a lot for any help :)
Questions 1-4 are all covered by the Bit Manipulation tutorial in the AVRFreaks forum Tutorials. In short:
1) The & is used for bit masking in this case.
2) If you search for "bit manipulation" there are loads of tutorials.
3) It's possible with bit manipulation. How? For an in-memory variable just use bit masking. To clear two bits: var &= 0b00111111; to set two bits: var |= 0b11000000. If you don't have the register value, you have to read it, modify it and write it back (see the sketch just after this list). If you can't read the value (for example it's an internal address, as with EEPROMs), you have to keep that value in memory anyway.
4) In C++, numbers starting with a zero are in octal base. If you want binary, you have to use 0b00010111; for hex you use the 0x prefix, as in 0xFF. This is not explicitly mentioned in that tutorial, but both prefixes are used there. As for the /10*16 + %10 formula: it simply packs the tens digit into the high nibble and the ones digit into the low nibble, which is exactly what binary-coded decimal means (see the second sketch at the end of this answer).
5) It should be as clear as possible, but for beginners without good knowledge of C++ it's hard anyway. For me the most difficult thing is reading code without indentation, or even worse with bad indentation and bad variable names. Libraries are usually written by advanced users, so with some background, such as knowing the datasheet for the MCU used, they are not so hard to understand.
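For question 3, here is a minimal sketch of the read-modify-write masking from point 3. It is plain Python rather than Arduino code, so a simulated register value stands in for the byte you would get from Wire.read() and send back with Wire.write(); the field positions (write flag in bits 7-6) follow the hypothetical register from the question.
# Simulated current contents of the hypothetical register 3
# (on the Arduino this byte would come from Wire.read()).
reg = 0b01011010

WRITE_FLAG_MASK = 0b11000000   # bits 7-6: the "write flag" field
new_flag_bits   = 0b10000000   # the value we want in that field

# Read-modify-write: clear only the flag bits, then OR in the new value.
reg = (reg & ~WRITE_FLAG_MASK & 0xFF) | (new_flag_bits & WRITE_FLAG_MASK)

print(f"{reg:08b}")   # 10011010 - bits 5..0 are untouched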
BTW: wrong comments are also bad for understanding:
//0x1F is hexadecimal of binary 0001111 for option 3 in register 1
The value 0x1F is definitely not 0001111 in binary but 00011111 (or better: 0b00011111)
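For question 4, the n/10*16 + n%10 formula is just packing the tens digit into the high nibble and the ones digit into the low nibble, which is exactly what binary-coded decimal is. A minimal sketch (again plain Python, not Arduino code):
def to_bcd(n):
    # Pack a two-digit decimal number into one BCD byte:
    # tens digit -> high nibble, ones digit -> low nibble.
    return (n // 10) * 16 + (n % 10)

print(f"{to_bcd(17):08b}")   # 00010111, i.e. 0x17
print(hex(to_bcd(59)))       # 0x59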

Finding similar hashes

I'm trying to find 2 different plain text words that create very similar hashes.
I'm using the hashing method 'whirlpool', but I don't really need my question answered specifically for whirlpool; if you can use md5 or something easier, that's ok.
The similarity I'm looking for is that the two hashes contain the same characters, each the same number of times (it doesn't matter how jumbled up they are).
i.e.
plaintext 'test'
hash 1: abbb5 has 1 a, 3 b's, one 5
plaintext 'blahblah'
hash 2: b5bab must have the same characters, but the order doesn't matter.
I'm sure I could read up on how the hashes are created, break the process down and reverse it, but I am just wondering whether what I'm describing actually occurs.
I'm wondering because I haven't found a match of the kind I'm describing (I created a PoC to run through random words/letters until it recreated a similar match), but then again it would take forever doing it the way I was doing it, and I was wondering if anyone with real knowledge of hashes/encryption could help me out.
So you can do it like this (a Python sketch follows the list):
1. create an empty sorted map
2. create a 64-bit counter (you don't need more than 2^63 inputs, in all probability, since you would be dead before they could all be calculated - unless quantum crypto really takes off)
3. use the counter as input; it is probably easiest to encode it in 8 bytes
4. use this as input for your hash function
5. encode the output of the hash in hex (use ASCII bytes, for speed)
6. sort the hex digits numerically/alphabetically (same thing, really)
7. check if the sorted hex result is a key in the map
8. if it is, show the hex result, the old counter from the map and the current counter (and stop)
9. if it isn't, put the sorted hex result in the map, with the counter as the value
10. increase the counter, go to step 3
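Here is a rough Python sketch of the procedure above, using SHA-1 from hashlib (the results quoted below are for SHA-1 as well); swap in another algorithm if you prefer.
import hashlib

def find_sorted_hex_collision():
    seen = {}                                        # sorted hex digest -> counter that produced it
    counter = 0
    while True:
        data = counter.to_bytes(8, "big")            # step 3: encode the counter as 8 bytes
        digest = hashlib.sha1(data).hexdigest()      # steps 4-5: hash and encode as hex
        key = "".join(sorted(digest))                # step 6: sort the hex digits
        if key in seen:                              # steps 7-8: collision on the sorted form
            return key, seen[key], counter
        seen[key] = counter                          # step 9: remember this sorted form
        counter += 1                                 # step 10: next counter

key, first, second = find_sorted_hex_collision()
print(key, first, second)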
That's all folks. Results for SHA-1:
011122344667788899999aaaabbbcccddeeeefff for both 320324 and 429678
I don't know why you want to do this for hex; the hashes will be so large that they won't look very much alike anyway. If your alphabet is smaller, your code will run (even) quicker. If you use whole output bytes (i.e. 00 to FF instead of 0 to F) instead of hex digits, it will take much more time - a quick (non-optimized) test on my machine doesn't finish in minutes and then runs out of memory.

Twofish cipher key generation

I'm trying to figure out how the Twofish cipher's expanded key is generated.
So far I have figured out that first one part of the key is generated and used for whitening, and then in each round a 2x32-bit key part is generated.
The first part of the key is made up of 3 vectors, Mo, Me and S.
Mo and Me are generated simply by expanding the primary key to a first defined length and splitting it into even and odd 32-bit words, which are then put into the Mo and Me vectors.
Vector S, on the other hand, is made up of k words, where k = N/64; in the case of N = 128, S contains 2x32-bit words.
So we get Mo (2x32) + Me (2x32) + S (2x32) + 16x2x32 = 38 32-bit words. 2 words are missing. Why??
And what if N = 192 or N = 256? How can the expanded key be 40 32-bit words instead of 41 or 44?
Any help would be great.
Thanks
To answer my own question: I found out that the vectors Mo, Me and S are not actually part of the expanded key, as I first thought; they are only used for generating the expanded key and the key-dependent S-boxes. The part of the key that is used for whitening is generated the same way as the parts used in each round, using the h function; it is just not used anywhere else except for whitening. Hope this saves some trouble for people with this kind of question. :)

How does the CPU do subtraction?

I have some basic doubts, but every time I sit down to try my hand at interview questions, these questions and my doubts pop up.
Say A = 5, B = -2. Assuming that A and B are 4-bytes, how does the CPU do the A + B addition?
I understand that A will have sign bit (MSB) as 0 to signify a positive value
and B will have sign bit as 1 to signify a negative integer.
Now, when in a C++ program I want to print A + B, does the addition module of the ALU (Arithmetic Logic Unit) first check the sign bit and then decide to do a subtraction, and then follow the procedure for subtraction? How subtraction is done will be my next question.
A = 5
B = 2
I want to do A - B. Will the computer take the 2's complement of B, add A to the 2's complement of B, and return that (after discarding the extra bit on the left)?
A = 2
B = 5
I want to do A - B. What does the computer do in this case?
I understand that any if-then style conditional logic, computing the 2's complement, discarding the extra bit and so on will all be done in hardware inside the ALU. What does this component of the ALU look like?
The whole reason we use 2's-complement is that addition is the same whether the numbers are positive or negative - there are no special cases to consider, like there are with 1's-complement or signed-magnitude representations.
So to find A-B, we can just negate B and add; that is, we find A + (-B), and because we're using 2's-complement, we don't worry if (-B) is positive or negative, because the addition-algorithm works the same either way.
Think in terms of two or three bits and then understand that these things scale up to 32 or 64 or however many bits.
First, let's start with decimal:
99
+22
===
In order to do this we are going to have some "carry the ones" going on.
11
99
+22
===
121
9 plus 2 is 1, carry the one; 1 plus 9 plus 2 is 2, carry the one...
The point, though, is to notice that to add two numbers I actually needed three rows: for at least some of it I needed to be able to add three numbers. The same thing applies to an adder in an ALU: each column or bit lane, a single-bit adder, needs to be able to add two inputs plus a carry-in bit, and the output is a one-bit result and a one-bit carry.
Since you used 5 and 2, let's do some 4-bit binary math:
0101
+0010
=====
0111
We didn't need a carry on this one, but you can see the math worked: 5 + 2 = 7.
And if we want to add 5 and -2
11
0101
+1110
=====
0011
And the answer is 3, as expected. Not really surprising, but we had a carry out. And since this was an add with a negative number in two's complement it all worked; there was no "if sign bit then" step, two's complement means we don't care - just feed the adder the two operands.
Now for a subtle difference: what if you want to subtract 2 from 5, so you select the subtract instruction rather than add? Well, we all learned that negating in two's complement means invert and add one. And we saw above that a two-input adder really needs a third input for carry in so that it can be cascaded to however wide the adder needs to be. So instead of doing two add operations (invert-and-add-one being the first add, then the real add), all we have to do is invert and set the carry in:
Understand that there is no subtract logic, it adds the negative of whatever you feed it.
v this bit is normally zero, for a subtract we set this carry in bit
11 11
0101 five
+1101 ones complement of 2
=====
0011
And what do you know, we get the same answer... It doesn't matter what the actual values of the operands are: if it is an add operation you put a zero on the carry-in bit and feed it to the adder; if it is a subtract operation you invert the second operand, put a one on the carry in and feed it to the same adder. Whatever falls out falls out. If your logic has enough bits to hold the result then it all works; if you do not have enough room then you overflow.
There are two kinds of overflow, unsigned and signed. Unsigned is simple: it is the carry bit. Signed overflow has to do with comparing the carry into the msbit column with the carry out of that column. For our math above you can see that the carry in and carry out of that msbit column are the same, both are one. And we happen to know by inspection that a 4-bit system has enough room to properly represent the numbers +5, -2 and +3. A 4-bit system can represent the numbers +7 down to -8. So if you were to add 5 and 5, or -6 and -3, you would get a signed overflow.
01 1
0101
+0101
=====
1010
Understand that the SAME addition logic is used for signed and unsigned math; it is up to your code, not the logic, to decide whether those bits are treated as two's complement signed or as unsigned.
With the 5 + 5 case above you see that the carry in on the msbit column is a 1 but the carry out is a 0, which means the V flag, the signed overflow flag, will be set by the logic. At the same time the carry out of that bit, which is the C flag (the carry flag), will not be set. Thinking unsigned, 4 bits can hold the numbers 0 to 15, so 5 + 5 = 10 does not overflow. But thinking signed, 4 bits can hold +7 to -8, and 5 + 5 = 10 is a signed overflow, so the V flag is set.
If/when you have an add-with-carry instruction, it uses the SAME adder circuit; instead of feeding the carry in a zero, it is fed the carry flag. Likewise for a subtract-with-borrow: instead of feeding the carry in a 1, the carry in is a 1 or a 0 based on the state of the carry flag in the status register.
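Here is a minimal Python sketch of the behaviour described above, assuming a 4-bit ripple-carry adder: subtract reuses the add path by inverting the second operand and setting the carry in, and the C and V flags fall out of the carries as described.
def alu4(a, b, subtract):
    # 4-bit add/subtract using a single ripple-carry adder.
    if subtract:
        b = ~b & 0xF                   # invert the second operand (ones' complement)
    carry = 1 if subtract else 0       # a subtract sets the carry in to 1
    result = 0
    carry_into_msb = 0
    for i in range(4):                 # ripple through the four bit lanes
        abit = (a >> i) & 1
        bbit = (b >> i) & 1
        total = abit + bbit + carry
        result |= (total & 1) << i
        if i == 3:
            carry_into_msb = carry     # remember the carry into the msbit column
        carry = total >> 1
    c_flag = carry                     # unsigned overflow: the carry out
    v_flag = carry_into_msb ^ carry    # signed overflow: carry in vs carry out of the msbit
    return result, c_flag, v_flag

print(alu4(0b0101, 0b0010, subtract=True))    # (3, 1, 0): 5 - 2 = 0b0011, carry out, no signed overflow
print(alu4(0b0101, 0b0101, subtract=False))   # (10, 0, 1): 5 + 5 = 0b1010, no carry, signed overflow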
Multiplication is a whole other story; binary makes multiplication much easier than it is with decimal math, but you DO have to have different unsigned and signed multiplication instructions. And division is its own separate beast, which is why most instruction sets do not have a divide. Many do not even have a multiply, because of the number of gates or clocks it burns.
You are a bit wrong in the sign bit part. It's not just a sign bit - every negative number is converted to 2's complement. If you write:
B = -2
The compiler when compiling it to binary will make it:
1111 1111 1111 1111 1111 1111 1111 1110
Now when it wants to add 5, the ALU gets the 2 numbers and adds them - a simple addition.
When the ALU gets a command to subtract, it is given 2 numbers: it applies a NOT to every bit of the second number, does a simple addition, and adds 1 more (because the 2's complement is NOT of every bit, plus 1).
The basic thing to remember here is that 2's complement was selected exactly so that there is no need for 2 separate procedures for 2 + 3 and for 2 + (-3).
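A quick Python illustration of that bit pattern and the addition, masking to 32 bits to mimic the fixed-width register:
# The 32-bit two's-complement pattern for -2, and the addition 5 + (-2).
B = -2 & 0xFFFFFFFF
print(f"{B:032b}")              # 11111111111111111111111111111110
print((5 + B) & 0xFFFFFFFF)     # 3 - the carry out of bit 31 is simply discarded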
does the addition module of ALU (Arithmetic Logic Unit) first check for sign bit and then decide to do subtraction and then follow the procedure of subtraction
No, in one's and two's complement there's no differentiation between adding/subtracting a positive or a negative number. The ALU works the same for any combination of positive and negative values.
So the ALU basically does A + (-B) for A - B, but it doesn't need a separate negation step. Designers use a clever trick to make adders do both add and sub in the same cycle length by adding only a mux and a NOT gate, along with a new input Binvert, to conditionally invert the second input. Here's a simple ALU example which can do AND/OR/ADD/SUB:
Computer Architecture - Full Adder
The real adder is just a box with a plus sign inside (⊞) which adds a, either b or ~b, and the carry in, producing the sum and the carry out. It works by realizing that in two's complement -b = ~b + 1, so a - b = a + ~b + 1. That means we just need to set the carry in to 1 (or negate the carry in for a borrow in) and invert the second input (i.e. b). This type of ALU can be found in various computer architecture books such as:
Digital Design and Computer Architecture
Computer Organization and Design MIPS Edition: The Hardware/Software Interface
Computer Organization and Design RISC-V Edition: The Hardware Software Interface
In one's complement -b = ~b, so you don't set the carry in when you want to subtract; otherwise the design is the same. However, two's complement has another advantage: operations on signed and unsigned values also work the same, so you don't even need to distinguish between signed and unsigned types. For one's complement you would need to add the carry bit back into the least significant bit if the type is signed.
With some simple modifications to the above ALU it can now do 6 different operations: ADD, SUB, SLT, AND, OR, NOR.
CSE 675.02: Introduction to Computer Architecture
Multi-bit operations are done by concatenating multiple of the single-bit ALUs above. In reality ALUs can do a lot more operations, but they are built to save space using a similar principle.
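Below is a rough Python sketch of that structure: a one-bit cell with a Binvert input and an operation mux, rippled out to four bits. The function names and operation codes are made up for illustration; they are not taken from the books cited above.
def alu_bit(a, b, carry_in, binvert, op):
    # One ALU bit slice: Binvert optionally inverts b, then a mux picks AND/OR/ADD.
    if binvert:
        b ^= 1                                   # the NOT gate controlled by Binvert
    total = a + b + carry_in                     # the full adder
    results = {"and": a & b, "or": a | b, "add": total & 1}
    return results[op], total >> 1               # (selected result, carry out)

def alu(a, b, op, width=4):
    # Ripple `width` one-bit slices together; SUB = ADD with Binvert and carry in 1.
    binvert = op == "sub"
    carry = 1 if binvert else 0
    out = 0
    for i in range(width):
        bit, carry = alu_bit((a >> i) & 1, (b >> i) & 1, carry,
                             binvert, "add" if op == "sub" else op)
        out |= bit << i
    return out

print(alu(0b0101, 0b0010, "sub"))   # 3
print(alu(0b0101, 0b0011, "and"))   # 1
print(alu(0b0101, 0b0011, "or"))    # 7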
In 2's-complement notation: not B = -B -1 or -B = (not B) + 1. It can be checked on a computer or on paper.
So A - B = A + (not B) + 1 which can be performed with:
1 bitwise not
1 increment
1 addition
There's a trick to inefficiently increment and decrement using just nots and negations.
For example if you start with the number 0 in a register and perform:
not, neg, not, neg, not, neg, ... the register will have values:
-1, 1, -2, 2, -3, 3, ...
Or as another 2 formulas:
not(-A) = A - 1
-(not A) = A + 1
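These identities are easy to check in Python, where ~ is bitwise NOT and unary minus is negation:
A = 5
print(~(-A) == A - 1)    # True: not(-A) = A - 1
print(-(~A) == A + 1)    # True: -(not A) = A + 1

# The not, neg, not, neg, ... sequence starting from 0:
x = 0
values = []
for _ in range(3):
    x = ~x      # not
    values.append(x)
    x = -x      # neg
    values.append(x)
print(values)            # [-1, 1, -2, 2, -3, 3]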
