How do you get rid of Hz when calculating MIPS?

I'm learning computer architecture.
I have a question about MIPS, one of the ways to characterize CPU performance.
The MIPS formula is as follows: MIPS = clock rate / (CPI * 10^6).
If the clock rate is 4 GHz and the CPI is 1,
I think MIPS is 4,000 Hz,
because it's (4 * 10^9 Hz) / (1 * 10^6).
I don't know whether it's right to leave the units as Hz.

Hz is 1/s. MIPS is actually "mega-instructions / s"; the "Per" is the slash for division: Mega Instructions Per Second = MI/s.
4 GHz is 4×10⁹ s⁻¹. Divide that by 1 cycle per instruction... but note that a cycle time is a period, which is the inverse of a frequency.
It's not "4000 Hz MIPS", because the "PS" in MIPS already means "per second"; writing Hz as well would give you 4000 million instructions × 1/s × 1/s.
The Hz gets absorbed because "per second" is part of the name you're labeling the result with.

For any quantity, it's important to know what units it's in. As well as a scale factor (like a parsec is many times longer than an angstrom), units have dimensions, and this is fundamental (at least for physical quantities like time; it can get less obvious when you're counting abstract things).
Those example units are both units of length so they have the same dimensions; it's physically meaningful to add or subtract two lengths, or if we divide them then length cancels out and we have a pure ratio. (Obviously we have to take care of the scale factors, because 1 parsec / 1 angstrom isn't 1, it's 3.0856776e+26.) That is in fact why we can say a parsec is longer than an angstrom, but we can't say it's longer than a second. (It's longer than a light-second, but that's not the only possible speed that can relate time and distance.)
1 m/s is not the same thing as 1 kg, or as dimensionless 1.
Time (seconds) is a dimension, and we can treat instructions-executed as another dimension. (I'll call it I since there isn't a standard SI unit for it, AFAIK, and one could argue it's not a real physical dimension. That doesn't stop this kind of dimensional analysis from being useful, though.)
(An example of a standard count-based unit is the mole in chemistry, a count of particles. It's an SI base unit.)
Counts of clock cycles can be treated as another dimension, in which case clock frequency is cycles / sec (c/s) rather than just s⁻¹. (Seconds, s, are the SI base unit of time.) If we want to make sure cycles cancel out correctly on both sides, that's a useful approach, especially when we have quantities like cycles/instruction (CPI). Cycle time is then s/c, seconds per cycle.
Hz has dimensions of s⁻¹, so if something per-second isn't dimensionless, we shouldn't use Hz for it. (Clock frequencies normally are given in Hz, because "cycles" aren't a real unit in physics; they're something we're introducing to make sure everything cancels properly.)
MIPS has dimensions of instructions / time (I/s), so the factors that contribute to it must cancel out any cycle counts. And we're not calling it Hz because we're treating "instructions" as a real unit, hence 4000 MIPS, not 4000 MHz. (And MIPS is itself a unit, so it's definitely not "4000 Hz MIPS"; if it made sense to combine units that way, the result would have dimensions of I/s², which would be an acceleration, not a speed.)
From your list of formulas, leaving out the factor of 10^6 (that's the M in MIPS, just a metric prefix in front of Instructions Per Second, I/s):
instructions / total time obviously works without needing any cancelling.
I / (c * s / c) = I / s after cancelling cycles in the denominator
(I * c/s) / (I * c/I) cancel the Instructions in the denominator:
(I * c/s) / c cancel the cycles:
(I * 1/s) / 1 = I/s
(c/s) / (c/I) cancel cycles:
(1/s) / (1/I) apply 1/(1/I) = I reciprocal of reciprocal
(1/s) * I = I / s
All of these have dimensions of Instructions / Second, i.e. I/s or IPS. With a scale factor of 10^6, that's MIPS.
BTW, this is called "dimensional analysis", and in physics (and other sciences) it's a handy tool to see if a formula is sane, because both sides must have the same dimensions.
e.g. if you're trying to remember how the position (or distance travelled) of an accelerating object works, d = 1/2 * a * t^2 works because acceleration is distance / time / time (e.g. m/s^2), and time-squared cancels out the s^-2, leaving just distance. If you mis-remembered something like 1/2 * a^2 * t, you can immediately see that's wrong because you'd have dimensions of m^2/s^4 * s = m^2/s^3, which is not a unit of distance.
(The factor of 1/2 is not something you can check with dimensional analysis; you only get those constant factors like 1/2, pi, e, or whatever from doing the full math, e.g. taking the derivative or integral, or making geometric arguments about linear plots of velocity vs. time.)
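Sanity-checking the numbers from the question, sketched in Python (the 10^6 is just the metric M prefix):

```python
# clock rate is cycles/second, CPI is cycles/instruction;
# dividing cancels cycles and leaves instructions/second
clock_hz = 4e9   # 4 GHz
cpi = 1          # cycles per instruction

ips = clock_hz / cpi   # instructions per second (I/s)
mips = ips / 1e6       # scale by 10^6: mega-instructions per second
print(mips)            # -> 4000.0, i.e. 4000 MIPS, not 4000 Hz
```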

Related

Seeding square roots on FPGA in VHDL for Fixed Point

I'm attempting to create a fixed-point square root function for a Xilinx FPGA (hence real types are out, and David Bishop's ieee_proposed library is also unsupported for XST synthesis).
I've settled on a Newton-Raphson method to calculate the reciprocal square root (as it involves fewer divisions).
One of the remaining dilemmas I have is how to generate the initial seed. I looked at the Fast Inverse Square Root, but it only appears to work for floating point arithmetic.
My best thought at the moment is to take the length of the input value (i.e. find the index of the most significant non-zero bit), halve it crudely, and use that as the power for a seed.
I wrote a short test script to quickly check the accuracy (it's in Matlab, but that's just so I could plot a graph...):
x = 1:2^24;
gen_result = zeros(1,length(x));
seed_vals = zeros(1,length(x));
for i = 1:length(x)
    result = 2^-ceil(log2(x(i))/2); % effectively creates seed value from top bit index
    seed_vals(i) = 1/result;        % store seed value
    for j = 1:6
        result = result*(1.5-0.5*x(i)*result^2); % reciprocal root iteration
    end
    gen_result(i) = 1/result;       % single division at the end
end
And unsurprisingly, the seed becomes wildly inaccurate each time the input crosses a power of two, and the error grows as the magnitude of the input increases. In the resulting plot, the red line is the value of the seed and, as can be seen, it increases in powers of 2.
My question is very simple: are there any other simple methods I could use to generate a seed value for fixed-point square roots in VHDL, ideally ones which don't cause ever-increasing amounts of inaccuracy (and hence require more iterations as the input grows in size)?
Any other incidental advice on how to approach finding fixed-point square roots in VHDL would be gratefully received!
I realize this is an old question but I did end up here and this was kind of useful so I want to add my bit.
Assuming your Xilinx chip has an embedded multiplier, you could consider this approach to help get a better starting seed. The basic premise is to convert the input integer to fixed point with all fraction bits, and then use the embedded multiplier to scale half of your initial seed value by 0.X (which in hindsight is probably what people mean when they say "normalize to the region [0.5..1)", now that I think about it). It's basically piecewise linear interpolation of your existing seed method. The steps below should translate relatively easily to RTL, as they're just bit-shifts, adds, and one unsigned multiply.
1) Begin with your existing seed value (e.g. for x = 9e6, your "crude halving" method generates s = 4096 as the seed for your first guess)
2) Right-shift that seed by 1 to get the next power-of-two seed down (s_half = s >> 1 = 2048)
3) Left-shift the input until the most significant bit is a 1. In the event you are sqrting 32-bit ints, x_scale would then be 2304000000 = 0x89544000
4) Slice the upper e.g. 18 bits off of x_scale and multiply by an 18-bit version of s_half (I suggest 18 because I happen to know some Xilinx chips have embedded 18x18 multipliers). For this case, the result, x_scale(31 downto 14) = 140625 = 0x22551.
At least, that's what the multiplier thinks - we're going to use fixed point so that it's actually 0b0.100010010101010001 = 0.53644 instead of 140625.
The result of this multiplication will be s_scale = s_half * x_scale(31 downto 14) = 2048 * 140625 = 288000000, but this output is in 18.18 format (18 integer bits, 18 fraction bits). Take the upper 18 bits, and you get s_scale(35 downto 18) = 1098
5) Add the upper 18 bits of s_scale to s_half to get your improved seed, in this case s_improved = 1098+2048 = 3146
Now you can do a few iterations of Newton-Raphson with this seed. For x=9e6, your crude halving approach would give an initial seed of 4096, the fixed-point scale outlined above gives you 3146, and the actual sqrt(9e6) is 3000. This value is half-way between your seed steps, and my napkin math suggests it saved about 3 iterations of Newton-Raphson
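Here is the same recipe sketched in Python (the function names and the 32-bit width are my own; the operations are just the shifts, adds, and one 18x18 unsigned multiply described above):

```python
import math

def crude_seed(x):
    # the question's "crude halving" seed: 2^ceil(log2(x)/2)
    return 1 << math.ceil(math.log2(x) / 2)

def improved_seed(x, width=32):
    # piecewise-linear refinement of the crude seed (steps 1-5 above)
    s = crude_seed(x)                         # step 1: e.g. 4096 for x = 9e6
    s_half = s >> 1                           # step 2: 2048
    x_scale = x << (width - x.bit_length())   # step 3: MSB at bit width-1
    frac = x_scale >> (width - 18)            # step 4: top 18 bits, 0.18 fixed point
    s_scale = (s_half * frac) >> 18           # 18.18 product, keep integer part
    return s_half + s_scale                   # step 5
```

For x = 9e6 this returns 3146, matching the walkthrough (vs. 4096 from crude halving and an exact sqrt of 3000).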

How to determine how long a recursive fibonacci algorithm will take for larger values if I have the time for the smaller values?

I have used the time library and timed how long the recursive algorithm takes to calculate the Fibonacci numbers up to 50. Given those numbers, is there a formula I can use to determine how long it would have potentially taken to calculate fib(100)?
Times for smaller values:
Fib(40): 0.316 sec
Fib(80): 2.3 years
Fib(100): ???
This depends very much on the algorithm in use. The direct (closed-form) computation takes constant time. The recursive computation without memoization is exponential, with a base of phi. Add memoization to this, and it drops to linear time.
The only one that could fit your data is the exponential time. Doing the basic math ...
(2.3 years / 0.316 sec) ** (1.0/40)
gives us
base = 1.6181589...
Gee, look at that! Less than one part in 10^4 more than phi!
Let t(n) be the time to compute Fib(n).
We can support the hypothesis that
t(n) = phi * t(n-1)
Therefore,
t(100) = phi^(100-80) * t(80)
I trust you can finish from here.
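Finishing the arithmetic in Python:

```python
phi = (1 + 5 ** 0.5) / 2     # ~1.618, the growth base measured from the data

t80 = 2.3                    # years (given)
t100 = phi ** (100 - 80) * t80
print(round(t100))           # -> 34792, i.e. roughly 35,000 years
```

phi^20 is about 15127, so fib(100) would take roughly 15,000 times as long as fib(80).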

How does one calculate the rate of change (derivative) of streaming data?

I have a stream of data that trends over time. How do I determine the rate of change using C#?
It's been a long time since calculus class, but now is the first time I actually need it (in 15 years). Now when I search for the term 'derivatives' I get financial stuff, and other math things I don't think I really need.
Mind pointing me in the right direction?
If you want something more sophisticated that smooths the data, you should look into a digital filter algorithm. It's not hard to implement if you can cut through the engineering jargon. The classic method is Savitzky-Golay.
If you have the last n samples stored in an array y and each sample is equally spaced in time, then you can calculate the derivative using something like this:
deriv = 0
coefficient = (1, -8, 0, 8, -1)
N = 5  # points
h = 1  # seconds between samples
for i in range(0, N):
    deriv += y[i] * coefficient[i]
deriv /= (12 * h)
This example happens to be an N=5 "3/4" (cubic/quartic) filter. The bigger N is, the more points it averages over and the smoother the result will be, but the latency will also be higher: you have to wait N/2 samples to get the derivative at time "now".
For more coefficients, look here at the Appendix
https://en.wikipedia.org/wiki/Savitzky%E2%80%93Golay_filter
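As a runnable sketch (my own packaging of the loop above into a function), checked against y = t^2, whose derivative at the middle sample t = 2 is exactly 4:

```python
def deriv5(y, h=1.0):
    # 5-point Savitzky-Golay first-derivative filter;
    # estimates dy/dt at the middle sample of the 5-element window y
    coefficient = (1, -8, 0, 8, -1)
    return sum(c * yi for c, yi in zip(coefficient, y)) / (12.0 * h)

# y = t^2 sampled at t = 0..4 with h = 1; derivative at t = 2 is 2*t = 4
print(deriv5([0, 1, 4, 9, 16]))  # -> 4.0
```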
You need both the data value V and the corresponding time T, at least for the latest data point and the one before that. The rate of change can then be approximated with Euler's backward formula, which translates into
dvdt = (V_now - V_a_moment_ago) / (T_now - T_a_moment_ago);
in C#.
Rate of change is calculated as follows
Calculate a delta, such as "price now minus price 20 days ago"
Calculate rate of change such as "delta / price 99 days ago"
Total rate of change, i.e. (new_value - original_value)/time?

How to do hypot2(x,y) calculation when numbers can overflow

I'd like to do a hypot2 calculation on a 16-bit processor.
The standard formula is c = sqrt((a * a) + (b * b)). The problem with this is that with large inputs it overflows. E.g. with a = 200 and b = 250, multiplying 200 * 200 gives 40,000, which is higher than the max signed 16-bit value of 32,767, so it overflows; b * b overflows too, the garbage values are added, and the result may as well be useless; it might even signal an error condition because of a sqrt of a negative number.
In my case, I'm dealing with 32-bit numbers, but 32-bit multiply on my processor is very fast, about 4 cycles. I'm using a dsPIC microcontroller. I'd rather not have to multiply with 64-bit numbers because that's wasting precious memory and undoubtedly will be slower. Additionally I only have sqrt for 32-bit numbers, so 64-bit numbers would require another function. So how can I compute a hypot when the values may be large?
Please note I can only really use integer math for this. Using anything like floating point math incurs a speed hit which I'd rather avoid. My processor has a fast integer/fixed point atan2 routine, about 130 cycles; could I use this to compute the hypotenuse length?
Depending on how much accuracy you need you may be able to avoid the squares and the square root operation. There is a section on this topic in Understanding Digital Signal Processing by Rick Lyons (section 10.2, "High-Speed Vector-Magnitude Approximation", starting at page 400 in my edition).
The approximation is essentially:
magnitude = alpha * min + beta * max
where max and min are the maximum and minimum absolute values of the real and imaginary components, and alpha and beta are two constants chosen to give a reasonable error distribution over the range of interest. These constants can be represented as fractions with power-of-2 divisors to keep the arithmetic simple/efficient. In the book he suggests alpha = 15/16, beta = 15/32, and you can then simplify the formula to:
magnitude = (15 / 16) * (max + min / 2)
which might be implemented as follows using integer operations:
magnitude = 15 * (max + min / 2) / 16
and of course we can use shifts for the divides:
magnitude = (15 * (max + (min >> 1))) >> 4
Error is +/- 5% over a quadrant.
More information on this technique here: http://www.dspguru.com/dsp/tricks/magnitude-estimator
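A minimal integer-only sketch of the estimator, using the shift form above:

```python
def mag_est(a, b):
    # alpha-max plus beta-min magnitude estimate with alpha = 15/16,
    # beta = 15/32, reduced to shifts: (15 * (max + (min >> 1))) >> 4
    hi = max(abs(a), abs(b))
    lo = min(abs(a), abs(b))
    return (15 * (hi + (lo >> 1))) >> 4
```

For (300, 400) it returns 515 against a true magnitude of 500, i.e. about 3% high, consistent with the +/-5% bound.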
This is taken verbatim from this John D. Cook blog post, hence CW:
Here’s how to compute sqrt(x*x + y*y)
without risking overflow.
max = maximum(|x|, |y|)
min = minimum(|x|, |y|)
r = min / max
return max*sqrt(1 + r*r)
If John D. Cook comes along and posts this, you should give him the accept :)
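That pseudocode translates directly to Python (note this assumes floating point is available, so it sidesteps rather than solves the OP's integer-only constraint):

```python
import math

def safe_hypot(x, y):
    # divide by the larger magnitude so r*r stays in [0, 1]
    # and the squares can never overflow
    hi = max(abs(x), abs(y))
    lo = min(abs(x), abs(y))
    if hi == 0:
        return 0.0
    r = lo / hi
    return hi * math.sqrt(1 + r * r)
```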
Since you essentially can't do any multiplications without overflow you're likely going to lose some precision.
To get the numbers into an acceptable range, pull out some factor x and use
c = x*sqrt( (a/x)*(a/x) + (b/x)*(b/x) )
If x is a common factor, you won't lose precision, but if it's not, you will lose precision.
Update:
Even better, given that you can do some mild work with 64-bit numbers, with just one 64-bit addition you could do the rest of this problem in 32 bits with only a tiny loss of accuracy. To do this: do the two 32-bit multiplications to give you two 64-bit numbers, add these, and then bit-shift as needed to get the sum back down to 32 bits before taking the square root. If you always shift by an even number of bits, you can just multiply the final result by 2^(half the number of bits shifted), per the rule above. The truncation should only cause a very small loss of accuracy, no more than 1 part in 2^31, or about 0.00000005% error.
Aniko and John, it seems to me that you haven't addressed the OP's problem. If a and b are integers, then a*a + b*b is likely to overflow, because integer operations are being performed. The obvious solution is to convert a and b to floating-point values before computing a*a + b*b. But the OP hasn't let us know what language we should use, so we're a bit stuck.
The standard formula is c = sqrt((a * a) + (b * b)). The problem with this is with large >inputs it overflows.
The solution for overflows (aside from throwing an error) is to saturate your intermediate calculations.
Calculate C = a*a + b*b. If a and b are signed 16-bit numbers, you will never have an overflow. If they are unsigned numbers, you'll need to right-shift the inputs first to get the sum to fit in a 32-bit number.
If C > (MAX_RADIUS)^2, return MAX_RADIUS, where MAX_RADIUS is the maximum value you can tolerate before encountering an overflow.
Otherwise, use either sqrt() or the CORDIC algorithm, which avoids the cost of square roots in favor of loop iteration + adds + shifts, to retrieve the amplitude of the (a,b) vector.
If you can constrain a and b to be at most 7 bits, you won't get any overflow. You can use a count-leading-zeros instruction to figure out how many bits to throw away.
Assume a>=b.
int bits = 16 - count_leading_zeros(a); /* a is the larger of the two */
if (bits > 7) {
    a >>= bits - 7;
    b >>= bits - 7;
}
c = sqrt(a*a + b*b);
if (bits > 7) {
    c <<= bits - 7;
}
Lots of processors have this instruction nowadays, and if not, you can use other fast techniques.
Although this won't give you the exact answer, it will be very close (at most ~1% low).
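A sketch of the same idea in Python, with math.isqrt standing in for the integer sqrt routine and a, b assumed non-negative:

```python
import math

def hypot_7bit(a, b):
    # shift a and b down to at most 7 bits so the squares can't
    # overflow, take the integer sqrt, then shift the result back up
    bits = max(a, b).bit_length()
    shift = max(0, bits - 7)
    c = math.isqrt((a >> shift) ** 2 + (b >> shift) ** 2)
    return c << shift
```

hypot_7bit(30000, 40000) returns 49664 against a true value of 50000, about 0.7% low.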
Do you need full precision? If you don't, you can increase your range a little bit by discarding a few least significant bits and multiplying them in afterwards.
Can a and b be anything? How about a lookup table if you only have a few a and b that you need to calculate?
A simple solution to avoid overflow is to divide both a and b by a+b before squaring, and then multiply the square root by a+b. Or do the same with max(a,b).
You can do a little simple algebra to bring the results back into range.
sqrt((a * a) + (b * b))
= 2 * sqrt(((a * a) + (b * b)) / 4)
= 2 * sqrt((a * a) / 4 + (b * b) / 4)
= 2 * sqrt((a/2 * a/2) + (b/2 * b/2))

Random number generation without using bit operations

I'm writing a vertex shader at the moment, and I need some random numbers. Vertex shader hardware doesn't have logical/bit operations, so I cannot implement any of the standard random number generators.
Is it possible to make a random number generator using only standard arithmetic? the randomness doesn't have to be particularly good!
If you don't mind crappy randomness, a classic method is
x[n+1] = (x[n] * x[n] + C) mod N
where C and N are constants, C != 0 and C != -2, and N is prime. This is a typical pseudorandom generator for Pollard Rho factoring. Try C = 1 and N = 8051, those work ok.
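As a quick sketch in Python (the generator packaging is mine) with those suggested constants:

```python
def quad_rng(x, c=1, n=8051):
    # x[k+1] = (x[k]^2 + c) mod n, the Pollard-rho style generator above
    while True:
        x = (x * x + c) % n
        yield x
```

Starting from x = 1 it produces 2, 5, 26, 677, ...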
Vertex shaders sometimes have built-in noise generators for you to use, such as cg's noise() function.
Use a linear congruential generator:
X_(n+1) = (a * X_n + c) mod m
Those aren't that strong, but at least they are well known and can have long periods. The Wikipedia page also has good recommendations:
The period of a general LCG is at most m, and for some choices of a much less than that. The LCG will have a full period if and only if:
1. c and m are relatively prime,
2. a - 1 is divisible by all prime factors of m,
3. a - 1 is a multiple of 4 if m is a multiple of 4.
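A sketch in Python using the well-known Numerical Recipes constants, which satisfy all three conditions for m = 2^32 (c is odd, and a - 1 is divisible by 4, hence by the only prime factor of m), so the period is the full 2^32:

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    # X_(n+1) = (a * X_n + c) mod m with full-period constants
    while True:
        seed = (a * seed + c) % m
        yield seed
```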
Believe it or not, I used newx = oldx * 5 + 1 (or a slight variation of it) in several videogames. The randomness is horrible--it's more of a scrambled sequence than a random generator. But sometimes that's all you need. If I recall correctly, it goes through all numbers before it repeats.
It has some terrible characteristics. It doesn't ever give you the same number twice in a row. A few of us did a bunch of tests on variations of it and we used some variations in other games.
We used it when there was no good modulo available to us. It's just a shift by two and two adds (or a multiply by 5 and one add). I would never use it nowadays for random numbers--I'd use an LCG--but maybe it would work OK for a shader where speed is crucial and your instruction set may be limited.
