Combining compound interest and constant increase - math

I found out how to calculate compound interest as such, actually in my application it is a compound discount so charging interest as opposed to earning it:
def compoundDiscount(balance, interest, period):
    return balance / (1 + interest) ** period
This allows me to calculate compound interest in a single "step", though there may be some logarithmic complexity in the exponentiation.
I can also discount a value by a constant amount:
def constantDiscount(balance, rate, period):
    return balance - rate * period
So what I'm wondering is whether there is a mathematical way to combine the two that also has logarithmic or better complexity, i.e.:
def combinedDiscount(balance, interest, rate, period):
    return {magic here}
Thanks
I tried various naive ways of combining the results of the two functions but didn't find anything accurate except O(n) iteration where n is the number of periods.
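For what it's worth, a closed form does exist under one common interpretation of "combining": if each period the balance is compound-discounted and then reduced by the constant amount, the recurrence has a geometric-series solution. A sketch (the per-period ordering is an assumption; adjust if your model subtracts the constant amount first):

```python
def combinedDiscount(balance, interest, rate, period):
    # Assumes each period applies the compound discount first,
    # then subtracts the constant amount:
    #   b[k+1] = b[k] / (1 + interest) - rate
    # Unrolling gives b[n] = q**n * b[0] - rate * (1 - q**n) / (1 - q)
    # with q = 1 / (1 + interest), computable with one exponentiation.
    q = 1 / (1 + interest)
    return balance * q ** period - rate * (1 - q ** period) / (1 - q)
```

This matches the O(n) iteration `b = b / (1 + interest) - rate` repeated `period` times, but in a single step.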


how to compute the global variance (square standard deviation) in a parallel application?

I have a parallel application in which each node computes the variance of its partition of data points based on the calculated mean, but how can I compute the global variance from all the partial variances?
I thought it would be a simple sum of the variances divided by the number of nodes, but that is not giving me a close result...
The global variance computation boils down to sums.
You can compute parts of the sum in parallel trivially, and then add them together.
sum(x1...x100) = sum(x1...x50) + sum(x51...x100)
The same way, you can compute the global averages - compute the global sum, compute the sum of the object counts, divide (don't divide by the number of nodes; but by the total number of objects).
mean = sum/count
Once you have the mean, you can compute the sum of squared deviations using the distributed sum formula above (applied to (xi-mean)^2), then divide by count-1 to get the variance.
Do not use E[X^2] - (E[X])^2
While this formula "mean of square minus square of mean" is highly popular, it is numerically unstable when you are using floating point math. It's known as catastrophic cancellation.
Because the two values can be very close, you lose a lot of digits in precision when computing the difference. I've seen people get a negative variance this way...
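A quick way to see the cancellation (illustrative numbers only): values with a large mean and a tiny spread make the naive formula fail while the two-pass version stays accurate.

```python
xs = [1e8, 1e8 + 1, 1e8 + 2]          # large mean, true variance 2/3

mean = sum(xs) / len(xs)

# Naive E[X^2] - (E[X])^2: subtracts two nearly equal ~1e16 numbers
naive = sum(x * x for x in xs) / len(xs) - mean ** 2

# Two-pass sum of squared deviations: numerically stable
stable = sum((x - mean) ** 2 for x in xs) / len(xs)
```

On typical IEEE-754 doubles, `naive` lands far from the true value 2/3 here while `stable` is essentially exact.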
With "big data", these numerical problems get worse...
Two ways to avoid these problems:
Use two passes. Computing the mean is stable, and rids you of the subtraction of squares.
Use an online algorithm such as the one by Knuth and Welford, then use weighted sums to combine the per-partition means and variances. Details on Wikipedia. In my experience, this often is slower; but it may be beneficial on Hadoop due to startup and IO costs.
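The per-partition combination can be sketched as follows (the pairwise-merge formula often credited to Chan et al.; `partition_stats` is a hypothetical per-node helper, and `combine` merges two partitions' statistics without ever forming E[X^2]):

```python
def partition_stats(xs):
    # Per-partition count, mean and M2 (sum of squared deviations)
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs)
    return n, mean, m2

def combine(a, b):
    # Merge (count, mean, M2) of two partitions; apply pairwise
    # across nodes, then sample variance = M2 / (n - 1).
    n_a, mean_a, m2_a = a
    n_b, mean_b, m2_b = b
    n = n_a + n_b
    delta = mean_b - mean_a
    mean = mean_a + delta * n_b / n
    m2 = m2_a + m2_b + delta ** 2 * n_a * n_b / n
    return n, mean, m2
```

The merge is associative, so partitions can be combined in any tree order across nodes.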
You need to add the sums and sums of squares of each partition to get the global sum and sum of squares and then use them to calculate the global mean and variance.
UPDATE: E[X^2] - (E[X])^2 and cancellation...
To figure out how important cancellation error is when calculating the standard deviation with
σ = √(E[X^2] - (E[X])^2)
let us assume that we have both E[X^2] and (E[X])^2 accurate to 12 significant decimal figures. This implies that σ^2 has an error of order 10^-12 × E[X^2] or, if there has been significant cancellation, equivalently 10^-12 × (E[X])^2, in which case σ will have an error of approximate order 10^-6 × E[X]: one millionth the mean.
For many, if not most, statistical analyses this is negligible, in the sense that it falls within other sources of error (like measurement error), and so you can in good conscience simply set negative variances to zero before you take the square root.
If you really do care about deviations of this magnitude (and can show that it's a feature of the thing you are measuring and not, for example, an artifact of the method of measurement) then you can start worrying about cancellation. That said, the most likely explanation is that you have used an inappropriate scale for your data, such as measuring daily temperatures in Kelvin rather than Celsius!

Rounding overly precise fractions to human-friendly precision

How can I round an excessively precise fraction to a less precise format that is more humanly readable?
I'm working with JPEG EXIF exposure time data extracted by Microsoft's Windows Imaging Component (WIC). WIC returns exposure times in fractional form with separate ints for numerator and denominator.
WIC usually works as expected, but with some JPEGs, WIC returns exposure times in millionths of a second, meaning that instead of reporting e.g. a 1/135 second exposure time, it reports an exposure time of 7391/1000000 seconds. The difference between 1/135 and 7391/1000000 is quite small but the latter is not intuitive to most users. As such, I'd like to round overly precise exposure times to the nearest standard exposure times used in photography.
Is there a better way to do this other than using a lookup table of known-reasonable exposure times and finding the nearest match?
You can compute the continued fraction expansion of the large fraction. Then take one of the first convergents as your approximate fraction.
In your case, you get
7391/1000000 = [ 0; 135, 3, 2, ...]
so the first convergent is 1/135=0.0074074..., the next
1/(135+1/3) = 3/406 = 0.00738916256...
and the third
1/(135+1/(3+1/2)) = 1/(135+2/7) = 7/947 = 0.00739176346...
To compute the (first) coefficients of a continued fraction expansion, you start with x_0 = x. Then iteratively apply the procedure:
Separate x_k = n + r into integer part n and fractional part r.
The integer is the next coefficient a_k; with the inverse of the fractional part you start the procedure anew, x_{k+1} = 1/r.
Applied to the given number, this produces exactly the start of the sequence above. Then reconstruct the rational convergents, and continue until the inverse of the square of the denominator is smaller than a given tolerance.
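That procedure can be sketched like this (floating-point input, with the 1/q² stopping rule; the function name and defaults are mine):

```python
from math import floor

def convergents(x, tol=1e-9, max_terms=12):
    # Standard recurrence: p[k] = a[k]*p[k-1] + p[k-2] (same for q),
    # seeded with p[-2]=0, p[-1]=1 and q[-2]=1, q[-1]=0.
    p_prev, p = 0, 1
    q_prev, q = 1, 0
    result = []
    for _ in range(max_terms):
        a = floor(x)
        p_prev, p = p, a * p + p_prev
        q_prev, q = q, a * q + q_prev
        result.append((p, q))
        frac = x - a
        # Stop when the approximation error bound 1/q^2 is small enough
        if frac == 0 or 1 / q ** 2 < tol:
            break
        x = 1 / frac
    return result
```

For 7391/1000000 with a tolerance of 1e-5 this yields the convergents 0/1, 1/135 and 3/406, stopping once 1/406² drops below the tolerance.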
Try this:
human_readable_denominator = int(0.5 + 1 / precise_exposure_time)
With the example you gave:
human_readable_denominator = int(0.5 + 1 / (7391/1000000))
= 135
This works well for exposure times less than 1/2 second. For longer exposure times, converting to a 1/X format doesn't make sense.
Phil
Take a look at approxRational in Haskell's Data.Ratio. You give it a number and an epsilon value, and it gives the nicest rational number within epsilon of that number. I imagine other languages have similar library functions, or you can translate the Haskell source for approxRational.
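In Python the closest analogue is `Fraction.limit_denominator`, which returns the best rational approximation whose denominator does not exceed a given bound:

```python
from fractions import Fraction

exposure = Fraction(7391, 1000000)
# Best approximation with denominator capped at 200
approx = exposure.limit_denominator(200)  # Fraction(1, 135)
```

This recovers the intuitive 1/135 from the question's example.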

Probability optimization

I have the data set
d <- data.frame(id=1:100, pr.a=runif(100, min=0, max=0.40))
d$pr.b <- d$pr.a + runif(100, min=0, max=0.1)
d$pr.c <- d$pr.b + runif(100, min=0, max=0.1)
pr.a < pr.b < pr.c are the probabilities of success of a binomial trial for tests A, B, C on individuals (id's).
additionally
cost.a<-80; cost.b=200; cost.c=600;
The tests A, B, C can be performed multiple times in each subject. So, for example, if IDx has pr.a = 0.2 and I do this test 2 times, I would expect a success probability of 1-pbinom(0,2,0.2)=0.36 at a cost of 2*cost.a = 160.
For each modality A, B, C, in all IDs I would like to find the distributions of the costs that would be required for a given target success rate (let's say target=0.9).
At first I would like to see the distributions of the costs if only one test type (only A's or only C's) is applied in each subject (albeit it can be performed multiple times on the same subject).
Additionally I would like to find if combination of types can minimize the cost for a target success rate.
This seems to me like an optimization problem. I have no experience with optimization. Any ideas, please?
I guess you could characterize this as an optimization problem, but only a very simple one. You are simply trying to maximize the probability per unit cost. To calculate this, simply divide the probability by the cost:
d$pr.a / cost.a # The probability per unit cost for A
d$pr.b / cost.b
d$pr.c / cost.c
Pick the maximum for each id, and you will get the 'best' test for each id. To calculate the expected cost for a given probability just divide the target by the probability and multiply by cost.
target <- 0.9
(target / d$pr.a) * cost.a
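The per-id cost can also be made exact with the geometric "at least one success" formula, 1-(1-p)^n ≥ target, solved for the repeat count n (sketched in Python rather than R; this replaces the simple target/probability ratio above, and the function name is mine):

```python
import math

def cost_for_target(p, unit_cost, target=0.9):
    # Smallest n with 1 - (1-p)**n >= target, i.e.
    # n = ceil(log(1-target) / log(1-p)); total cost is n * unit_cost.
    n = math.ceil(math.log(1 - target) / math.log(1 - p))
    return n, n * unit_cost
```

For the question's example (pr.a = 0.2, cost.a = 80) this gives 11 repeats at a cost of 880.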

How does one calculate the rate of change (derivative) of streaming data?

I have a stream of data that trends over time. How do I determine the rate of change using C#?
It's been a long time since calculus class, but now is the first time I actually need it (in 15 years). Now when I search for the term 'derivatives' I get financial stuff, and other math things I don't think I really need.
Mind pointing me in the right direction?
If you want something more sophisticated that smooths the data, you should look into a digital filter algorithm. It's not hard to implement if you can cut through the engineering jargon. The classic method is Savitzky-Golay.
If you have the last n samples stored in an array y and each sample is equally spaced in time, then you can calculate the derivative using something like this:
deriv = 0
coefficient = (1, -8, 0, 8, -1)
N = 5  # points
h = 1  # seconds between samples
for i in range(N):
    deriv += y[i] * coefficient[i]
deriv /= (12 * h)
This example happens to be an N=5, cubic/quartic ("3/4") filter. The bigger N is, the more points it averages and the smoother the result will be, but the latency will also be higher: you have to wait N/2 points to get the derivative at time "now".
For more coefficients, look here at the Appendix
https://en.wikipedia.org/wiki/Savitzky%E2%80%93Golay_filter
You need both the data value V and the corresponding time T, at least for the latest data point and the one before that. The rate of change can then be approximated with Euler's backward formula, which translates into
dvdt = (V_now - V_a_moment_ago) / (T_now - T_a_moment_ago);
in C#.
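The same backward difference, as a small generator over a stream of (time, value) pairs (a Python sketch of the C# one-liner above):

```python
def rate_of_change(samples):
    # samples: iterable of (t, v) pairs; yields the backward
    # difference (v_now - v_prev) / (t_now - t_prev) per new sample.
    prev_t = prev_v = None
    for t, v in samples:
        if prev_t is not None:
            yield (v - prev_v) / (t - prev_t)
        prev_t, prev_v = t, v
```

Because it only keeps the previous sample, it works on an unbounded stream in O(1) memory.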
Rate of change is calculated as follows:
Calculate a delta such as "price minus price 20 days ago"
Calculate the rate of change such as "delta / price 99 days ago"
Total rate of change, i.e. (new_value - original_value)/time?

How can I measure volatility?

I am trying to determine the volatility of a rank.
More specifically, the rank can be from 1 to 16 over X data points (the number of data points varies with a maximum of 30).
I'd like to be able to measure this volatility and then map it to a percentage somehow.
I'm not a math geek so please don't spit out complex formulas at me :)
I just want to code this in the simplest manner possible.
I think the easiest first pass would be Standard Deviation over X data points.
I think that Standard Deviation is what you're looking for. There are some formulas to deal with, but it's not hard to calculate.
Given that you have a small sample set (you say a maximum of 30 data points) and that the standard deviation is easily affected by outliers, I would suggest using the interquartile range as a measure of volatility. It is a trivial calculation and would give a meaningful representation of the data spread over your small sample set.
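The interquartile range is a one-liner with most standard libraries; for instance, a Python sketch:

```python
import statistics

def iqr(ranks):
    # Spread of the middle 50% of the data; robust to the outliers
    # that can inflate the standard deviation on small samples.
    q1, _, q3 = statistics.quantiles(ranks, n=4)
    return q3 - q1
```

To map it to a percentage you could divide by the maximum possible spread of the ranks (15 for ranks 1 to 16).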
If you want something really simple you could take the average of the absolute differences between successive ranks as volatility. This has the added bonus of being recursive. Use this for initialisation:
double sum = 0;
for (int i = 1; i < N; i++)
{
    sum += abs(ranks[i] - ranks[i-1]);
}
double volatility = sum / (N - 1);  // N ranks give N-1 successive differences
Then, for updating the volatility when a new rank at time N+1 is available, you introduce the parameter K, where K determines the speed with which your volatility measurement adapts to changes in volatility. Higher K means slower adaptation, so K can be thought of as a "decay time" of sorts:
double K = 14;  // higher = slower change in volatility over time
double newvolatility;
newvolatility = (oldvolatility * (K-1) + abs(rank[N+1] - rank[N]))/K;
This is also known as a moving average (of the absolute differences in ranks in this case).
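The update rule, as a one-line function (a Python sketch of the pseudo-C above):

```python
def update_volatility(old_vol, prev_rank, new_rank, k=14.0):
    # Exponentially weighted moving average of absolute rank changes;
    # larger k adapts more slowly (k acts like a decay time in samples).
    return (old_vol * (k - 1) + abs(new_rank - prev_rank)) / k
```

Call it once per new rank, feeding the previous result back in as `old_vol`.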
