what is my increment percentage from 0 to 20? - percentage

Suppose I have a pen which initially cost $0 and now costs $20.
What is the increment percentage?
Strictly applying the formula, it will cause a divide-by-zero exception.

Some things happen in practice first and are then made part of study; others are conjectured, experimented with, set down in theory and only later proven in practice. Mathematics deals with both. This is a mathematical question, not an astronomical one. It is not a planetary-distance calculation whose actual value is unknown, where even which part is unknown is itself unknown.
The pen does exist; it is not virtual. It cost nothing, perhaps because it came free with something else, but it does have an actual price, which is simply not known. So there is a definite price increase from $0 to $20; that is, 20 - 0 = 20 is the increase.
You ultimately cannot divide by 0 because 20 cannot be split into any number of fragments of size 0: 0 added to itself any number of times (0 * n) always remains 0 and never reaches 20.
Infinity therefore cannot be the answer either, since no number of zeros ever adds up to 20. That is why, in such cases, the zero in the denominator can be replaced by 1, standing in for the unknown actual price of the pen.
Division by zero gives infinity, so in questions that need a definitive figure rather than an infinity answer, the 0 is changed to 1 for the division only, not for the subtraction, letting the numerator decide the increase factor. Infinity is a convenience here, not the answer; the pen's earlier actual price is indeterminate, so you cannot divide by it.
So the denominator can be treated as CASE number WHEN 0 THEN 1 ELSE number END, which gives ((20 - 0) * 100) / 1 = 2000%. The price has risen by 2000% for the end consumer of the pen.
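For illustration, a minimal Python sketch of that convention (the helper name increase_pct is hypothetical): substitute 1 for a zero denominator, just like the CASE expression above.
def increase_pct(old, new):
    # Percentage increase, substituting 1 when the old value is 0
    # (the CASE old WHEN 0 THEN 1 ELSE old END convention above).
    denom = old if old != 0 else 1
    return (new - old) * 100 / denom

print(increase_pct(0, 20))   # 2000.0, the 2000% from this answer
print(increase_pct(10, 20))  # 100.0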

How do you find the increment percentage if the initial cost is zero? The initial cost should be greater than zero.
For example:
initial cost 10
new cost 20
then
(20 - 10) / 10 * 100
so
100% is the increment percentage.

The increment percentage is undefined because you can't divide by zero.
The formula is (new - old) / old, and here it turns into 20 / 0.

Related

Divide number by 2 using only addition and subtraction

I want to implement division by 2 using only + and - (preferably -). I know I could divide just by repeatedly subtracting, but that solution would be extremely slow for bigger numbers. I hope someone can help me with this.
My numbers can be anything from 0 to 255. I can add and subtract (surprisingly) and check whether numbers are equal/greater/less etc., but I'd use comparison only when it is unavoidable.
If addition overflows, the result will be 255. If subtraction underflows, the result will be 0.
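One possible approach under these constraints (a Python sketch, not from the original thread; the name halve is hypothetical) is to build the quotient bit by bit, doubling a candidate with addition and comparing it to the input:
def halve(n):
    # Compute n // 2 for 0 <= n <= 255 using only addition and comparison.
    h = 0
    for bit in (64, 32, 16, 8, 4, 2, 1):   # bits of the largest possible result, 127
        candidate = h + bit
        if candidate + candidate <= n:     # "doubling" done with addition only
            h = candidate
    return h

print(halve(255))  # 127
print(halve(10))   # 5
A handful of additions and comparisons per call, regardless of the input, instead of the slow repeated-subtraction loop.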

Why a/0 returns Inf instead of NaN in R? [duplicate]

I'm just curious: why, in IEEE-754, does any non-zero float number divided by zero result in an infinite value? It's nonsense from the mathematical perspective, so I think the correct result for this operation is NaN.
The function f(x) = 1/x is not defined at x = 0 for real x. The function sqrt, for example, is not defined for any negative number, and sqrt(-1.0f) in IEEE-754 produces a NaN value. Yet 1.0f/0 is Inf.
But for some reason that is not how IEEE-754 behaves. There must be a reason for this, maybe some optimization or compatibility concern.
So what's the point?
It's nonsense from the mathematical perspective.
Yes. No. Sort of.
The thing is: Floating-point numbers are approximations. You want to use a wide range of exponents and a limited number of digits and get results which are not completely wrong. :)
The idea behind IEEE-754 is that every operation could trigger "traps" which indicate possible problems. They are
Illegal (senseless operation like sqrt of negative number)
Overflow (too big)
Underflow (too small)
Division by zero (The thing you do not like)
Inexact (This operation may give you wrong results because you are losing precision)
Now many people like scientists and engineers do not want to be bothered with writing trap routines. So Kahan, the inventor of IEEE-754, decided that every operation should also return a sensible default value if no trap routines exist.
They are
NaN for illegal values
signed infinities for Overflow
signed zeroes for Underflow
NaN for indeterminate results (0/0) and signed infinities for x/0 with x != 0
normal operation result for Inexact
The thing is that in 99% of all cases zeroes are caused by underflow, and therefore in 99% of all cases Infinity is "correct" even if wrong from a mathematical perspective.
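A quick illustration of those defaults using NumPy doubles, which follow IEEE-754 (plain Python floats raise ZeroDivisionError instead); np.errstate just silences the warnings that stand in for the traps.
import numpy as np

with np.errstate(divide='ignore', over='ignore', invalid='ignore'):
    print(np.sqrt(np.float64(-1.0)))                # nan  (illegal operation)
    print(np.float64(1e308) * np.float64(10.0))     # inf  (overflow)
    print(np.float64(1e-308) * np.float64(1e-308))  # 0.0  (underflow)
    print(np.float64(1.0) / np.float64(0.0))        # inf  (division by zero)
    print(np.float64(0.0) / np.float64(0.0))        # nan  (0/0 is indeterminate)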
I'm not sure why you would believe this to be nonsense.
The simplistic definition of a / b, at least for non-zero b, is the unique number of bs that has to be subtracted from a before you get to zero.
Expanding that to the case where b is zero, the number of zeroes that would have to be subtracted from any non-zero a to get to zero is indeed infinite, because you'll never get to zero.
Another way to look at it is to talk in terms of limits. As a positive number n approaches zero, the expression 1 / n approaches "infinity". You'll notice I've quoted that word because I'm a firm believer in not propagating the delusion that infinity is actually a concrete number :-)
NaN is reserved for situations where the number cannot be represented (even approximately) by any other value (including the infinities), it is considered distinct from all those other values.
For example, with 0 / 0 (using our simplistic definition above), any number of bs (zeroes) can be subtracted from a to reach 0. Hence the result is indeterminate: it could be 1, 7, 42, 3.14159 or any other value.
Similarly, things like the square root of a negative number, which has no value among the real numbers used by IEEE 754 (you have to go to the complex plane for that), cannot be represented.
In mathematics, division by zero is undefined because zero has no sign, therefore two results are equally possible, and exclusive: negative infinity or positive infinity (but not both).
In (most) computing, 0.0 has a sign. Therefore we know what direction we are approaching from, and what sign the infinity should have. This is especially true when 0.0 represents a non-zero value too small to be expressed by the system, as is frequently the case.
The only time NaN would be appropriate is if the system knows with certainty that the denominator is truly, exactly zero. And it can't unless there is a special way to designate that, which would add overhead.
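A small NumPy illustration of that point: a product whose true value is a tiny negative number underflows to -0.0, and dividing by it picks the matching negative infinity.
import numpy as np

tiny = np.float64(-1e-200) * np.float64(1e-200)  # true value -1e-400, underflows to -0.0
print(tiny)                                       # -0.0
with np.errstate(divide='ignore'):
    print(np.float64(1.0) / tiny)                 # -inf: the sign of the zero sets the direction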
NOTE:
I re-wrote this following a valuable comment from @Cubic.
I think the correct answer to this has to come from calculus and the notion of limits. Consider the limit of f(x)/g(x) as x->0 under the assumption that g(0) == 0. There are two broad cases that are interesting here:
If f(0) != 0, then the limit as x->0 is either plus or minus infinity, or it's undefined. If g(x) takes both signs in the neighborhood of x==0, then the limit is undefined (left and right limits don't agree). If g(x) has only one sign near 0, however, the limit will be defined and be either positive or negative infinity. More on this later.
If f(0) == 0 as well, then the limit can be anything, including positive infinity, negative infinity, a finite number, or undefined.
In the second case, generally speaking, you cannot say anything at all. Arguably, in the second case NaN is the only viable answer.
Now in the first case, why choose one particular sign when either is possible or it might be undefined? As a practical matter, it gives you more flexibility in cases where you do know something about the sign of the denominator, at relatively little cost in the cases where you don't. You may have a formula, for example, where you know analytically that g(x) >= 0 for all x, say, for example, g(x) = x*x. In that case the limit is defined and it's infinity with sign equal to the sign of f(0). You might want to take advantage of that as a convenience in your code. In other cases, where you don't know anything about the sign of g, you cannot generally take advantage of it, but the cost here is just that you need to trap for a few extra cases - positive and negative infinity - in addition to NaN if you want to fully error check your code. There is some price there, but it's not large compared to the flexibility gained in other cases.
Why worry about general functions when the question was about "simple division"? One common reason is that if you're computing your numerator and denominator through other arithmetic operations, you accumulate round-off errors. The presence of those errors can be abstracted into the general formula format shown above. For example f(x) = x + e, where x is the analytically correct, exact answer, e represents the error from round-off, and f(x) is the floating point number that you actually have on the machine at execution.
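As a rough numeric sketch of the first case (f and g here are made-up examples, with g(x) = x*x as in the text): the denominator underflows to +0.0 for tiny x, and the quotient comes out as an infinity whose sign matches the sign of f(0).
import numpy as np

def f(x):
    return np.float64(-3.0)      # any f with f(0) != 0

x = np.float64(1e-200)
g = x * x                        # analytically 1e-400, underflows to +0.0
with np.errstate(divide='ignore'):
    print(f(x) / g)              # -inf, matching the analytic limit as x -> 0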

How to calculate percentage when old value is ZERO

I need the logic for the following situation; I am clueless about how to do it.
Consider for January I have 10$ revenue and for February I have 20$ revenue.
My growth would be ((20-10)/10)*100% = 100%
If I have 0$ revenue for March.
Then my growth would be ((0-10)/10)*100% = -100%. Should I call this a negative percentage? (I know it's deflation.)
Going one step further,
If now I have 20$ revenue for April.
How can I calculate the growth now? Surely the following formula is wrong: ((20-0)/0)*100% = ?????
My basic questions are
Is there a better solution to find growth rate, other than the one above?
If I use the aforementioned formula, should I take some value as a reference, or is that wrong as well?
This is most definitely a programming problem. The trouble is that it cannot be programmed, per se. When the previous value is actually zero, the concept of percentage change has no meaning. Zero to anything cannot be expressed as a rate, because it is outside the definition boundary of a rate. Going from 'not being' into 'being' is not a change of being; it is instead the creation of being.
If you're required to show growth as a percentage, it's customary to display [NaN] or something similar in these cases. A growth rate, on the other hand, would be reported in this case in $/month. So in your example the growth rate for April would be calculated as (20 - 0)/1 = $20/month.
In any event, determining the correct method for reporting this special case is a user decision. Is it covered in your user requirements?
There is no rate of growth from 0 to any other number. That is to say, there is no percentage of increase from zero to greater than zero and there is no percentage of decrease from zero to less than zero (a negative number). What you have to decide is what to put as an output when this situation happens. Here are two possibilities I am comfortable with:
Any time you have to show a rate of increase from zero, output the infinity symbol (∞). That's Alt + 236 on your number pad, in case you're wondering. You could also use negative infinity (-∞) for a negative growth rate from zero.
Output a statement such as "[Increase/Decrease] From Zero" or something along those lines.
Unfortunately, if you need the growth rate for further calculations, the above options will not work; on the other hand, any number would feed your subsequent calculations incorrect data anyway, so the point is moot. You'd need to update those calculations to account for this eventuality.
As an aside, the ((New-Old)/Old) function will not work when your new and old values are both zero. You should create an initial check to see if both values are zero and, if they are, output zero percent as the growth rate.
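A small Python sketch of those conventions (the name growth_pct is hypothetical): infinity, with a sign, for growth from zero, and 0% when both values are zero.
import math

def growth_pct(old, new):
    # 0 -> 0 is reported as 0%; growth from 0 is reported as +/- infinity,
    # which the caller can render as the infinity symbol or a message.
    if old == 0 and new == 0:
        return 0.0
    if old == 0:
        return math.inf if new > 0 else -math.inf
    return (new - old) / old * 100

print(growth_pct(10, 20))  # 100.0
print(growth_pct(0, 20))   # inf
print(growth_pct(0, 0))    # 0.0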
How to deal with Zeros when calculating percentage changes is the researcher's call and requires some domain expertise. If the researcher believes that it would not be distorting the data, s/he may simply add a very small constant to all values to get rid of all zeros. In financial series, when dealing with trading volume, for example, we may not want to do this because trading volume = 0 just means that: the asset did not trade at all. The meaning of volume = 0 may be very different from volume = 0.00000000001.
This is my preferred strategy in cases where I cannot logically add a small constant to all values. Consider the percentage change formula ((New - Old)/Old) * 100. If New = 0, the percentage change is -100%. That number actually makes financial sense as long as it is the minimum percentage change in the series (and it is indeed guaranteed to be the minimum). Why? Because it shows that trading volume experienced the maximum possible decrease, going from some positive number to 0: -100%. So I'm fine with that value being in my percentage change series. If I then normalize the series, even better, since this (possibly) relatively large absolute value will be analyzed on the same scale as the other variables.
Now, what if the Old value is 0? That's a trickier case. The percentage change from going 0 to 1 would equal that from going 0 to a million: infinity. The fact that we call both "infinity" is problematic. In this case, I would set the infinities equal to np.nan and interpolate them.
The following graph shows what I discussed above. Starting from series 1, we get series 4, which is ready to be analyzed, with no Inf or NaNs.
One more thing: a lot of the time, the reason for calculating percentage change is to stationarize the data. So, if your original series contains zero and you wish to convert it to percentage change to achieve stationarity, first make sure it is not already stationary. Because if it is, you don't have to calculate percentage change. The point is that series that take the value of 0 a lot (the problem OP has) are very likely to be already stationary, for example the volume series I considered above. Imagine a series oscillating above and below zero, thus hitting 0 at times. Such a series is very likely already stationary.
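A minimal pandas sketch of that strategy (the volume series is a made-up example): take the percentage change, turn the infinities from zero denominators into NaN, and interpolate them.
import numpy as np
import pandas as pd

volume = pd.Series([3.0, 0.0, 2.0, 5.0, 0.0, 4.0])

pct = volume.pct_change() * 100               # -100.0 wherever the series drops to 0
pct = pct.replace([np.inf, -np.inf], np.nan)  # growth from 0 becomes NaN...
pct = pct.interpolate()                       # ...and is interpolated, as described above
print(pct)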
There are a couple of things to consider.
If your value is 0 for that month and it was 0 before, the change from 0 to 0 is 0, so it is meaningful in that sense. You could adjust by adding a small number, so it becomes a change from 0.1 to 0.1; then the change and percentage change are both 0 and 0%.
Now think about the case where you change from 0 to 20. The same practice would result in massive reporting issues: depending on which small number you choose to add, e.g. 0.1 versus 0.001, your percentage change would differ by a factor of 100. So there is a problem with that practice.
Even a change from 1 to 20 gives a % change of 19/1 = 1900%. The % change doesn't make much sense when you start off so low; it becomes very sensitive to any change and may skew your results if other data points are on a different scale.
So it is important to understand your data, and in this case, how frequent you encounter 0s and extreme numbers in your data.
You can add 1 to each value.
Example:
new = 5;
old = 0;
((1 + new) - (old + 1)) / (old + 1) = 5 / 1 = 5
5 * 100 ==> 500%
It should be (new - old) divided by the average of the absolute values of old and new,
with a special case when both values are zero.
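A sketch of that suggestion in Python, reading "mod avg" as the average of the absolute values (the function name symmetric_change_pct is mine):
def symmetric_change_pct(old, new):
    # (new - old) over the average of |old| and |new|; both-zero is treated as 0%.
    if old == 0 and new == 0:
        return 0.0
    return (new - old) / ((abs(old) + abs(new)) / 2) * 100

print(symmetric_change_pct(0, 20))   # 200.0: growth from zero is capped at +200%
print(symmetric_change_pct(10, 20))  # 66.66...
One consequence of this design is that the result always stays between -200% and +200%, so the zero case no longer blows up.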
When both values are zero, the change is zero.
If one of the values is zero, the change is infinite (ambiguous), but I would set it to 100%.
Here is C++ code (where v1 is the previous (old) value and v2 is the new one):
double result = 0;
if (v1 != 0 && v2 != 0) {
    // If both values are non-zero, use the standard formula.
    result = (v2 / v1) - 1;
} else {
    // Change is zero when both values are zero, otherwise it's 100%.
    result = (v1 == 0 && v2 == 0) ? 0 : 1;
}
// Give the result the sign of the change; std::fabs needs <cmath>.
result = v2 > v1 ? std::fabs(result) : -std::fabs(result);
// Note: to express the result in hundreds (percent), multiply it by 100.
function percentChange(initialValue, finalValue) {
    if (initialValue == 0 && finalValue == 0) {
        return 0;
    } else if (initialValue != 0 && finalValue != 0) {
        return (finalValue / initialValue) - 1;
    } else {
        return finalValue > initialValue ? 1 : -1;
    }
}
It should be float('inf') * new_value.
That makes later comparisons easier, because any comparison with float('NaN') returns False (except !=), since NaN doesn't compare equal to any value.
Use the formula below, as it gives a 100% growth rate for the case of 0 to any number:
IFERROR((NEW-OLD)/OLD,100%)

Can someone explain how probabilistic counting works?

Specifically, around the log-log counting approach.
I'll try and clarify the use of probabilistic counters although note that I'm no expert on this matter.
The aim is to count to very very large numbers using only a little space to store the counter (e.g. using a 32 bits integer).
Morris came up with the idea to maintain a "log count", so instead of counting n, the counter holds log₂(n). In other words, given a value c of the counter, the real count represented by the counter is 2ᶜ.
As logarithms are generally not integer-valued, the problem becomes deciding when the counter c should be incremented, since we can only do so in steps of 1.
The idea here is to use a "probabilistic counter": for each call to a method Increment on our counter, we update the actual counter value with some probability p. This is useful because it can be shown that the expected value represented by the counter value c under probabilistic updates is in fact n. In other words, on average the value represented by our counter after n calls to Increment is indeed n (but at any one point in time our counter probably has some error)! We are trading accuracy for the ability to count up to very large numbers with little storage space (e.g. a single register).
One scheme to achieve this, as described by Morris, is to have a counter value c represent the actual count 2ᶜ (i.e. the counter holds the log₂ of the actual count). We update this counter with probability 1/2ᶜ where c is the current value of the counter.
Note that choosing this "base" of 2 means that our actual counts are always powers of 2 (hence the term "order of magnitude estimate"). It is also possible to choose another base b > 1 (typically with b < 2) so that the error is smaller, at the cost of a smaller maximum countable number.
The log log comes into play because in base 2 a number x needs about log₂(x) bits to be represented, so a counter that stores log₂(n) needs only about log₂(log₂(n)) bits.
There are in fact many other schemes to approximate counting, and if you are in need of such a scheme you should probably research which one makes sense for your application.
References:
See Philippe Flajolet for a proof of the average value represented by the counter, or a much simpler treatment in the solutions to problem 5-1 in the book "Introduction to Algorithms". The paper by Morris is usually behind paywalls; I could not find a free version to post here.
It's not exactly about the log-log counting approach, but I think it can help you.
Using Morris' algorithm, the counter represents an "order of magnitude estimate" of the actual count. The approximation is mathematically unbiased.
To increment the counter, a pseudo-random event is used, such that the incrementing is a probabilistic event. To save space, only the exponent is kept. For example, in base 2, the counter can estimate the count to be 1, 2, 4, 8, 16, 32, and all of the powers of two. The memory requirement is simply to hold the exponent.
As an example, to increment from 4 to 8, a pseudo-random number would be generated such that there is a probability of 0.25 of producing a positive change in the counter. Otherwise, the counter remains at 4. (From Wikipedia.)
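A minimal sketch of a Morris-style counter in Python, assuming base 2 and the standard 2^c - 1 estimate (the class and method names are mine, not from the references):
import random

class MorrisCounter:
    # The stored exponent c represents an estimated count of about 2**c,
    # so only about log2(log2(n)) bits are needed to count up to n.
    def __init__(self):
        self.c = 0

    def increment(self):
        # Bump the exponent with probability 1 / 2**c.
        if random.random() < 1.0 / (1 << self.c):
            self.c += 1

    def estimate(self):
        # 2**c - 1 is the unbiased estimate of the number of increments.
        return (1 << self.c) - 1

counter = MorrisCounter()
for _ in range(100_000):
    counter.increment()
print(counter.estimate())   # around 100000 on average, with considerable variance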

When iterating through a set of numbers, will time increase at a constant exponential rate

Hello good people of Stack Overflow, this is a conceptual question and could possibly belong on math.stackexchange.com; however, since it relates to the processing speed of a CPU, I put it here.
Anyways, my question is pretty simple. I have to calculate the sum of the cubes of 3 numbers in a range of numbers. That sounds confusing to me, so let me give an example.
I have a range of numbers, (0, 100), and a list of each number's cube. I have to calculate each and every combination of 3 numbers in this set, for example 0^3 + 0^3 + 0^3, 1^3 + 0^3 + 0^3, ... 98^3 + 99^3 + 100^3. That may make sense; I'm not sure I explained it well enough.
So anyway, after all the sums are computed and checked against a list of numbers to see if any of them match, the program moves on to the next set, (100, 200). This set needs to compute everything from (100 to 200) + (0 to 200) + (0 to 200). Then (200, 300) will need to do (200 to 300) + (0 to 300) + (0 to 300), and so on.
So my question is: depending on the size of the numbers given to the CPU to add, will the time taken increase? And will the time each set takes increase exponentially at a predictable rate, or will it grow but not at a constant rate?
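For concreteness, here is one way to read the computation being described, as a Python sketch for the first set; the targets set is a hypothetical list of numbers to check against:
from itertools import combinations_with_replacement

targets = {1729, 1000000}                   # numbers to check the sums against
cubes = {n: n ** 3 for n in range(0, 101)}  # the first set, (0, 100)

for a, b, c in combinations_with_replacement(range(0, 101), 3):
    total = cubes[a] + cubes[b] + cubes[c]
    if total in targets:
        print(a, b, c, total)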
The time to add two numbers is logarithmic with the magnitude of the numbers, or linear with the size (length) of the numbers.
For a 32-bit computer, numbers up to 2^32 will take 1 unit of time to add, numbers up to 2^64 will take 2 units, etc.
As I understand the question you have roughly 100*100*100 combinations for the first set (let's ignore that addition is commutative). For the next set you have 100*200*200, and for the third you have 100*300*300. So it looks like you have an O(n^2) process going on there. So if you want to calculate twice as many sets, it will take you four times as long. If you want to calculate thrice as many, it's going to take nine times as long. This is not exponential (such as 2^n), but usually referred to as quadratic.
It depends on how long "and so on" lasts. As long as your maximum number, cubed, fits in your longest integer type, no: it always takes just one instruction to add, so it's constant time.
Now, if you assume an arbitrary-precision machine, like, say, writing these numbers on the tape of a Turing machine in decimal symbols, then adding will take longer. In that case, consider how long it would take: think about how the length of the string of decimal symbols representing a number n grows. Addition will take time at least proportional to that length.
