Divide number by 2 using only addition and subtraction - math

I want to implement division by 2 using only + and - (preferably -). I know I could divide by repeatedly subtracting, but that would be extremely slow for bigger numbers. I hope someone can help me with that.
My numbers can be anything from 0 to 255. I can add and subtract (surprisingly) and check whether numbers are equal/greater/less, etc., but I'd like to use comparison only when it is unavoidable.
If addition overflows, the result will be 255. If subtraction underflows, the result will be 0.
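One way to avoid repeated subtraction is to build the half bit by bit, doubling candidates with addition. This is only a sketch (the function name is mine, and it does lean on comparison, which the question concedes may be unavoidable). With inputs capped at 255, every intermediate sum stays at or below 254, so the saturating addition described above never triggers:

```c
#include <assert.h>

/* Halve n (0..255) using only addition and comparison.
   Builds the quotient bit by bit: for each power of two d,
   tentatively set that bit of the candidate half h and keep
   it only if the doubled candidate (computed as t + t, an
   addition) still fits in n. No shifts, no division. */
unsigned half(unsigned n) {
    static const unsigned bits[] = {64, 32, 16, 8, 4, 2, 1};
    unsigned h = 0;
    for (int i = 0; i < 7; i++) {
        unsigned t = h + bits[i];  /* candidate with one more bit set */
        if (t + t <= n)            /* doubling via addition */
            h = t;
    }
    return h;  /* floor(n / 2) */
}
```

This takes seven additions and seven comparisons for any input, instead of up to 127 subtractions.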

Related

Why reverse the remainder during decimal to binary conversion?

There is a particular method of converting a decimal (with a decimal point, like xx.xx) to a binary number. It is detailed here: https://www.geeksforgeeks.org/convert-decimal-fraction-binary-number/
I can apply this process but am having trouble understanding WHY it works.
Basically, it calculates the left side of the decimal point separately from the right side - this part I have no issue with.
For example, if we have 6.9, it will start by calculating the left side: 6.
6 divided by 2 gives us 3, with a remainder of 0.
3 divided by 2 gives us 1, with a remainder of 1.
1 divided by 2 gives us 0, with a remainder of 1.
For some reason, it now takes the REVERSE of this, which is 110, and this magically becomes 6.
I don't understand why the remainder of the least significant division (1 divided by 2) is now used in the most significant bit of the answer, and this somehow works.
Similarly confused about why the method for the right hand side works.
Does anyone have some intuition they can share about this particular process of converting decimals to binaries? Again, I have no problem performing the calculation as the computation is quite easy. I simply don't understand why this works.
Think of it like this:
A binary representation b_n, b_(n-1), .., b_0 (least significant bit on the right) represents the number
k = b_n*2^n + b_(n-1)*2^(n-1) + ... + b_0*2^0 (remember that 2^0 is just 1).
To get the least significant bit, you want to know whether this number divides evenly by 2: if it doesn't, then b_0 == 1, because all the other terms surely divide evenly, as they each carry a positive power of 2. Thus the remainder from the division by two is b_0. Don't divide just yet; only get the remainder and write it down.
Now we would like to get rid of that last bit and start over again to get the next one. How can we do that? Simply divide k by two. Because then you get:
k/2 = b_n*2^(n-1) + b_(n-1)*2^(n-2) + ... + b_1*2^0 (Divide each term in the sum by 2, thus decreasing the power. The last term disappears because it was either 0 or 1, which both give 0 when divided by 2)
Or written in binary (without the powers of two) : b_n, b_(n-1), .., b_1.
Now we get a new number which is simply the same as before where the least significant bit has been thrown away and everything shifted to the right. So we can start this whole process again with k/2 to get b_1. And then b_2. And so on.
Here I separated getting the remainder and dividing to make it clearer, but you can do them at the same time if you want to, it's the same thing.
I hope you see how, during this process, we get the bits from right to left, which is why you want to flip the whole thing in the end if you have been writing them down from left to right.
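The process described above can be sketched in code (an illustration of the answer's idea, not taken from it; the function name is mine):

```c
#include <string.h>

/* Write the binary digits of n into out, as a string.
   Repeatedly take the remainder mod 2 (the next bit) and then
   halve -- exactly the process described above. Bits arrive
   least-significant first, so the string is reversed at the
   end: the "flip" the answer explains. */
void to_binary(unsigned n, char *out) {
    int len = 0;
    do {
        out[len++] = '0' + (n % 2);  /* remainder = next bit, right to left */
        n /= 2;                      /* drop that bit, shift the rest right */
    } while (n > 0);
    for (int i = 0; i < len / 2; i++) {  /* reverse in place */
        char tmp = out[i];
        out[i] = out[len - 1 - i];
        out[len - 1 - i] = tmp;
    }
    out[len] = '\0';
}
```

For the example in the question, the remainders of 6 arrive as 0, 1, 1 and are reversed to give "110".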

what is my increment percentage from 0 to 20?

Suppose I have a pen which initially cost $0, and now it costs $20.
What is the increment percentage?
Ideally, it will cause the divide by zero exception.
Some things happen in practice first and are then made part of study; other things are believed, experimented with, set down in theory, and only later proven in practice. Mathematics deals with both. This is a mathematical question, not an astronomical one. It is not a planetary-distance calculation whose actual value is unknown and whose current value is unknown too; it is not a case where even which part is unknown is itself unknown.
A pen does exist; it is not virtual. It costs nothing. Why? It may have been free with something else, but it does have an actual price, which is simply not known. So there is a definite price increase from $0 to $20. That means 20 - 0 is the increase.
You ultimately cannot divide by 0, because you cannot split 20 into any number of fragments of size 0: 0 taken n times will always remain 0 and never reach 20. Infinity hence cannot be the answer either, since infinitely many zeros still sum to zero. That is why, in such cases, you can accept setting the denominator to 1, to stand in for the unknown actual price of the pen.
Dividing by zero tends to infinity, so in questions like this, which need a definite figure rather than "infinity" as the answer, the 0 is changed to 1 for the division (not for the subtraction), letting the numerator decide the increase factor. Infinity is a convenience here, not the answer. The pen's earlier actual price is indeterminate, so you cannot divide by it.
So the denominator can be treated as CASE WHEN number = 0 THEN 1 ELSE number END. Then it becomes ((20 - 0) * 100) / 1 = 2000%. So the price has risen by 2000% for the end consumer of the pen.
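The CASE-style guard above translates directly into code. A sketch of that convention (treating a zero old price as 1 is this answer's convention, not a mathematical identity; the function name is mine):

```c
/* Percent increase from old_price to new_price, using the
   answer's convention: a zero denominator is replaced by 1
   so the calculation stays defined. */
double percent_increase(double old_price, double new_price) {
    double denom = (old_price == 0.0) ? 1.0 : old_price;
    return (new_price - old_price) * 100.0 / denom;
}
```

With this guard, the pen example gives 2000%, while the usual case (10 to 20) still gives 100%.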
How do you find the increment percentage if the initial cost is zero? It should be greater than zero.
For example
initial cost 10
New cost 20
then
(20 - 10)/10 * 100
So
100% is the increment %
The increment percentage is undefined because you can't divide by zero.
The formula is (new - old)/old, and here it turns into 20/0.

rounding to the nearest zero, bitwise

I just wonder how I can round toward zero bitwise. Previously, I performed the division using a loop. However, since the number is always divided by a power of 2, I decided to use bit shifting. So I can get results like this:
12/4=3
13/4=3
14/4=3
15/4=3
16/4=4
Can I get the same results with shifts instead of performing the long division as usual?
12>>2
13>>2
If I use this kind of bit shifting, is the behavior different for different compilers? And how about rounding up? I am using the Visual C++ 2010 compiler and gcc. Thanks.
Bitwise shifts are equivalent to round-to-negative-infinity divisions by powers of two, meaning that the answer is never bigger than the unrounded value (so e.g. (-3) >> 1 is equal to -2). Note that this holds on implementations that use arithmetic shifts; right-shifting a negative signed value is implementation-defined in C and C++.
For non-negative integers, this is equivalent to round-to-zero.
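For non-negative operands, both rounding modes the question asks about can be done with shifts. A sketch (function names are mine; unsigned operands sidestep the implementation-defined behavior of shifting negative values):

```c
/* Divide by 2^k, truncating: for non-negative n this is
   round-toward-zero, which is also round-to-negative-infinity. */
unsigned div_down(unsigned n, unsigned k) {
    return n >> k;
}

/* Divide by 2^k, rounding up: add (divisor - 1) first, so any
   nonzero remainder bumps the quotient to the next integer. */
unsigned div_up(unsigned n, unsigned k) {
    return (n + ((1u << k) - 1)) >> k;
}
```

So 13 >> 2 gives 3 as in the question's table, while the round-up variant gives 4 for 13/4 and still 4 for the exact case 16/4.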

How do you perform floating point arithmetic on two floating point numbers?

Suppose I wanted to add, subtract, and/or multiply the following two floating point numbers that follow the format:
1 bit sign
3 bit exponent (bias 3)
6 bit mantissa
Can someone briefly explain how I would do that? I've tried searching online for helpful resources, but I haven't been able to find anything too intuitive. However, I know the procedure is generally supposed to be very simple. As an example, here are two numbers that I'd like to perform the three operations on:
0 110 010001
1 010 010000
To start, take the significand encoding and prefix it with a “1.”, and write the result with the sign determined by the sign bit. So, for your example numbers, we have:
+1.010001
-1.010000
However, these have different scales, because they have different exponents. The exponent of the second one is four less than the first one (010₂ compared to 110₂). So shift it right by four bits:
+1.010001
- .0001010000
Now both significands have the same scale (exponent 110₂), so we can perform normal arithmetic, in binary:
+1.010001
- .0001010000
_____________
+1.0011000000
Next, round the significand to the available bits (seven). In this case, the trailing bits are zero, so the rounding does not change anything:
+1.001100
At this point, we could have a significand that needed more shifting, if it were greater than 2 (10₂) or less than 1. However, this significand is just where we want it, between 1 and 2. So we can keep the exponent as is (110₂).
Convert the sign back to a bit, take the leading “1.” off the significand, and put the bits together:
0 110 001100
Exceptions would arise if the number overflowed or underflowed the normal exponent range, but those did not happen here.
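The walkthrough can be checked numerically. A sketch that decodes this 1-bit-sign / 3-bit-exponent / 6-bit-mantissa format (the function name is mine; it assumes normalized values only, which covers all three numbers here):

```c
/* Decode a 10-bit value in the 1/3/6 format with bias 3,
   normal numbers only:
   value = (-1)^sign * (1 + mantissa/64) * 2^(exponent - 3). */
double decode(unsigned bits) {
    unsigned sign = (bits >> 9) & 1;
    int exponent  = (int)((bits >> 6) & 7) - 3;  /* remove bias 3 */
    unsigned mant = bits & 63;
    double val = 1.0 + mant / 64.0;              /* implicit leading 1. */
    for (int i = 0; i < exponent; i++) val *= 2.0;
    for (int i = 0; i > exponent; i--) val /= 2.0;
    return sign ? -val : val;
}
```

Decoding the inputs gives 10.125 (0 110 010001) and -0.625 (1 010 010000); their sum, 9.5, matches the decoded result 0 110 001100, as the walkthrough derived.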

When iterating through a set of numbers, will time increase at a constant exponential rate

Hello, good people of Stack Overflow. This is a conceptual question and could possibly belong on math.stackexchange.com; however, since it relates to the processing speed of a CPU, I put it here.
Anyways, my question is pretty simple. I have to calculate the sum of the cubes of 3 numbers in a range of numbers. That sounds confusing to me, so let me give an example.
I have a range of numbers, (0, 100), and a list of each number's cube. I have to calculate every combination of 3 numbers in this set, for example 0^3 + 0^3 + 0^3, 1^3 + 0^3 + 0^3, ... 98^3 + 99^3 + 100^3. That may make sense; I'm not sure if I explained it well enough.
So anyways, after all the sets are computed and checked against a list of numbers to see if the sum matches any of those, the program moves on to the next set, (100, 200). This set needs to compute everything from 100-200 + 0-200 + 0-200. Then (200, 300) will need to do 200-300 + 0-300 + 0-300, and so on.
So, my question is: depending on the size of the numbers given to a CPU to add, will the time taken increase? And will the time each set takes grow at a constant, predictable rate, or will the growth be irregular?
The time to add two numbers is logarithmic with the magnitude of the numbers, or linear with the size (length) of the numbers.
For a 32-bit computer, numbers up to 2^32 will take 1 unit of time to add, numbers up to 2^64 will take 2 units, etc.
As I understand the question you have roughly 100*100*100 combinations for the first set (let's ignore that addition is commutative). For the next set you have 100*200*200, and for the third you have 100*300*300. So it looks like you have an O(n^2) process going on there. So if you want to calculate twice as many sets, it will take you four times as long. If you want to calculate thrice as many, it's going to take nine times as long. This is not exponential (such as 2^n), but usually referred to as quadratic.
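That count can be checked directly. A quick sketch using this answer's rough model (the function name is mine; commutativity is ignored, as the answer says):

```c
/* Rough count of triples for the k-th set under the answer's
   model: 100 choices for the first number, 100*k for each of
   the other two. Quadratic in k. */
long long combos(long long k) {
    return 100 * (100 * k) * (100 * k);
}
```

Doubling k multiplies the count by four and tripling it multiplies it by nine, the quadratic growth the answer describes.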
It depends on how long "and so on" lasts. As long as your maximum number, cubed, fits in your longest integer type, no. Addition always takes just one instruction, so it's constant time.
Now, if you assume an arbitrary-precision machine, say writing these numbers on the tape of a Turing machine in decimal symbols, then adding will take longer. In that case, consider how long it would take. In other words, think about how the length of a string of decimal symbols grows to represent a number n. Addition will take time at least proportional to that length.
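As a rough illustration of that growth (a sketch; the function name is mine): the length an arbitrary-precision add must at least read grows only logarithmically with the number's magnitude.

```c
/* Number of decimal digits in n -- the number of tape symbols
   an arbitrary-precision addition must at least scan. */
int decimal_length(unsigned long long n) {
    int len = 1;
    while (n >= 10) {
        n /= 10;
        len++;
    }
    return len;
}
```

Multiplying n by a thousand adds only three digits, so the per-addition cost creeps up very slowly compared to the quadratic growth in the number of additions.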
