What happens to the sign of a signed value when it is shifted to the left?

I have read in the book The Art of Assembly Language that the highest-order bit in a signed integer represents the sign of the integer. What happens to the sign bit if you shift the binary form of the integer to the left?
For example, consider the bits 10101010 (signed). If you shift it left, it becomes 01010100, so the sign changes.
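A quick way to see what happens is to model an 8-bit two's-complement value in Python (a sketch; the helper function and the 8-bit width are my assumptions, not from the book):

```python
def to_signed8(bits: int) -> int:
    # Interpret the low 8 bits as an 8-bit two's-complement value.
    bits &= 0xFF
    return bits - 256 if bits & 0x80 else bits

x = 0b10101010                # sign bit set: -86 as a signed byte
print(to_signed8(x))          # -86
shifted = (x << 1) & 0xFF     # shift left, keep only 8 bits
print(f"{shifted:08b}")       # 01010100
print(to_signed8(shifted))    # 84: the old sign bit was shifted out
```

In other words, a left shift does not treat the sign bit specially: it is simply shifted out of the register (into the carry flag on most CPUs), and whatever was in the next bit down becomes the new sign.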

Point of check digits in MRZs?

Not sure if this is the right subreddit to ask this question, but I will give it a shot. There is the ICAO standard for Machine Readable Zones, as described here: https://en.wikipedia.org/wiki/Machine-readable_passport. I don't see the point of the check digits there.
If I have an F instead of a 5 somewhere in the second line, for example, all the check digits stay the same. What is the point of those check digits in the ICAO standard in the first place? In particular, I don't see the point of the final check digit's calculation, since you could also calculate it from the check digits of the second line rather than from all the letters/numbers.
Could someone explain why we need those check digits?
To be fair, this is not a subreddit. Anyway, there are multiple reasons why there are check digits inside the MRZ. The first is that automatic readers can check whether the code was read correctly. The second is that they prevent a lot of fraud and identity theft. Some people who alter their travel documents do not know that the check digits are there, so they get caught because they fail to update the numbers.
Some countries now include PDF417 barcodes and/or QR codes to achieve better machine reads. But keep in mind that not all governments/countries have access to high-tech devices, so the machine readable zone is still mandatory for a check with the naked eye.
Source: I work for a travel document verification company.
MRZ check digits are calculated over subsections of the entire MRZ. Each calculation serves as a check for its section. A final check digit is calculated over the sections together, and it serves as a double check of the individual checks.
The strings below have the same check digit of 8:
123456780
128456785
Here a subsection check digit can still match after tampering, but the final check digit will detect it; the final check digit therefore adds additional robustness.
I do wonder, though, whether this visual check digit is really necessary, since the BAC protocol of an eMRTD's NFC chip also performs a much stronger cryptographic check of the MRZ value.
UPDATE: My original claim that the composite check digit adds robustness against tampering is incorrect. Given the TD1 MRZ below:
IDSLV0012345678<<<<<<<<<<<<<<<
9306026F2708252SLV<<<<<<<<<<<4
JOHN<SMEAGOL<<WENDY<LIESSETTEF
An OCR scanner could read the document number portion as either 0012345678 or OO12345678, and all check digits pass, including the composite check digit. But there is no way to tell which document number is correct. It seems that MRZ check digits have edge cases that cannot be helped.
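For reference, the check digit in question is the ICAO 9303 7-3-1 scheme; a minimal Python sketch (the function name is mine) reproduces both behaviors discussed above:

```python
def mrz_check_digit(field: str) -> int:
    """ICAO 9303 check digit: digits keep their value, letters
    A-Z count as 10-35, '<' counts as 0; weight the values with
    the repeating 7,3,1 pattern and take the sum modulo 10."""
    def value(c: str) -> int:
        if c.isdigit():
            return int(c)
        if c == "<":
            return 0
        return ord(c) - ord("A") + 10

    weights = (7, 3, 1)
    return sum(value(c) * weights[i % 3] for i, c in enumerate(field)) % 10

# The two strings from the answer really do share check digit 8:
print(mrz_check_digit("123456780"), mrz_check_digit("128456785"))  # 8 8

# The OCR edge case: '0' is worth 0 but 'O' is worth 24, so swapping
# a pair of them changes the weighted sum by 24*7 + 24*3 = 240, a
# multiple of 10: the check digit cannot tell the strings apart.
print(mrz_check_digit("001234567"), mrz_check_digit("OO1234567"))  # 8 8
```

The same arithmetic explains the F-versus-5 observation in the question: F is worth 15 and 5 is worth 5, a difference of 10, which vanishes modulo 10 under every weight.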

Proof of overflow with carry out and carry in

I have an exercise which I don't really understand.
Prove that in a 2's complement number system addition overflows if and only if the carry from the sign position does not equal the carry into the sign position. Consider the three cases: adding two positive numbers, adding two negative numbers and adding two numbers of opposite sign.
I know how to add two numbers, and how to see whether the addition overflows by looking at the carry in and carry out.
But how do I do this proof in a general way?
Since your question shows few details and you show no work of your own, I'll answer with few details.
For each of those three cases (two positives, two negatives, one of each), consider the four sub-cases (carry into and out of the sign bit, carry into but not out of, carry out of but not into, no carry at all). In each case, show that some of those sub-cases are not possible. Then look at each sub-case and see if it means overflow.
Let's look at the first case--two positive numbers. First show that it is not possible for any carry out of the sign bit, so that removes two sub-cases. Then show that a carry into the sign bit (but not out of) is an overflow condition, and that no carry into the sign bit (and none out of) is not an overflow condition.
Then in each and every sub-case you will see that overflow happens when the two carries (in to and out of) differ and overflow does not happen when the two carries are equal.
This may not be the "general way" you were looking for, since you need to consider twelve combinations of cases and sub-cases, eliminating some and looking at the consequences of the others. But it does work. If you want more details, show more work of your own and I will be glad to add more.

Firebase transaction precision loss

In the Firebase Realtime Database I have a path like this: ../funds/balance, with a value of 10.
I have a function that runs a transaction to decrease the balance by 2 cents.
The first time I remove 2 cents, everything works and the balance is saved as 9.98.
I do it one more time; the console log shows I read 9.98 from the db, and after the data is updated the Firebase console shows 9.96.
The next time I do it, the Firebase console still shows 9.96, but the value received in the transaction, when I log it, is actually 9.96000001.
Does someone know why?
This is a well-known property of floating-point representations.
The issue arises because, in this case, 0.02 cannot be represented exactly as a binary floating-point number. It can be represented exactly in decimal (2x10^-2), but its binary expansion is non-terminating: 0.02 = 1/50, and any fraction whose denominator has a prime factor other than 2 (here, 5) recurs in binary.
What you are seeing is analogous to representing 1/3 in decimal as 0.333. You expect that subtracting 1/3 from one 3 times would leave 0, but subtracting 0.333 from one 3 times leaves 0.001.
The solution is not to represent money as floating-point. In SQL you'd use a MONEY, NUMERIC or DECIMAL type; more generally, you can represent money as an integer number of pennies (or hundredths of a penny, etc., according to your needs).
There's an article called "What Every Computer Scientist Should Know About Floating-Point Arithmetic" that you can find online that explains the issue.
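The drift is easy to reproduce in any language that uses IEEE-754 doubles; here is a Python sketch of the thread's scenario, with the integer-cents fix alongside:

```python
# Doubles: repeatedly subtracting 0.02 drifts away from the
# decimal result, because 0.02 has no exact binary representation.
balance = 10.0
for _ in range(3):
    balance -= 0.02
print(balance)        # very close to 9.94, but in general not exact

# Integer cents: every value and every subtraction is exact.
cents = 1000          # 10.00 represented in cents
for _ in range(3):
    cents -= 2
print(cents / 100)    # 9.94
```

Since the Realtime Database stores numbers as doubles, keeping the balance as an integer number of cents sidesteps the issue there as well.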

HERE maps: difference between positive and negative link ids

What is the difference between positive and negative link IDs? For example, link IDs 781299767 and -781299767 have the same address, but only -781299767 has a speed limit. This can be seen from http://route.st.nlp.nokia.com/routing/6.2/getlinkinfo.xml?linkIds=781299767,-781299767&app_id=DemoAppId01082013GAL%20&app_code=AJKnXv84fjrb0KIHawS0Tg
The documentation of LinkIdType explains the meaning of the minus sign:
"Permanent ID which references a network link. When presented with a minus sign
as the first character, this ID indicates that the link should be traversed in the
opposite direction of its default coding (for example, walking SW on a link that is coded as one-way traveling NE)."

Arithmetic operation in Assembly

I am learning assembly language. I find that arithmetic in assembly can be either signed or unsigned. The rules differ for the two types, and I find it is the programmer's headache to decide which rules to apply. So a programmer should know beforehand whether the arithmetic involves negative numbers; if yes, signed arithmetic rules should be used, otherwise the simpler and easier unsigned arithmetic will do.
The main problem I find with unsigned arithmetic is: what if the result is larger than its storage area? That can easily be solved by using a bigger-than-required storage area for the data, but it consumes extra bytes and the size of the data segment grows. If the size of the data is no issue, can't we use this technique freely?
If you are the programmer, you are in control of your data representation within the bounds of the requirements of your software's target domain. This means you need to know well before you actually start touching code what type of data you are going to be dealing with, how it is going to be arranged (in the case of complex data types) and how it is going to be encoded (floating-point/unsigned integer/signed integer, etc.). It is "safest" to use the operations that match the type of the data you're manipulating which, if you've done your design right, you should already know.
It's not that simple. Most arithmetic operations are sign-agnostic: they are neither signed nor unsigned.
The interpretation of the result, which is determined by the program's specification, is what makes them signed or unsigned, not the operation itself. The proper flavor of compare (and conditional branch) instruction always has to be chosen carefully.
Some CPU architectures have distinct signed and unsigned divide instructions, but that is about as far as it goes. Most CPUs have shift-right instruction flavors that either preserve the high bit (arithmetic shift) or replace it with zero (logical shift); these serve signed and unsigned handling, respectively.
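Both points can be sketched in Python by modeling an 8-bit register (the helper names are mine, not any CPU's mnemonics):

```python
MASK = 0xFF  # model an 8-bit register

def add8(a: int, b: int) -> int:
    # Addition is sign-agnostic: one bit pattern, two readings.
    return (a + b) & MASK

# 0xF0 + 0x30 gives 0x20 under either interpretation:
#   unsigned: 240 + 48 = 288, which wraps to 32
#   signed:   -16 + 48 = 32
print(hex(add8(0xF0, 0x30)))      # 0x20

def lsr8(x: int) -> int:
    # Logical shift right: fill the high bit with 0 (unsigned use).
    return (x & MASK) >> 1

def asr8(x: int) -> int:
    # Arithmetic shift right: keep the high (sign) bit (signed use).
    x &= MASK
    return (x >> 1) | (x & 0x80)

print(f"{lsr8(0b10101010):08b}")  # 01010101
print(f"{asr8(0b10101010):08b}")  # 11010101
```

The adder produces one bit pattern either way; only the shift-right (and compare/branch) choice encodes whether you are treating the data as signed.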
