In Erlang:
cost(I, Miners) ->
    BasePrice = lists:nth(I, prices()),
    Owned = lists:nth(I, Miners),
    Rate = increaseRate(I) / 100,
    Multiplier = math:pow(1 + Rate, Owned),
    floor(BasePrice * Multiplier).
for example, a base price of 8000, with an increase rate of 7, and I own 0
the price of the first one I expect to be: 8000
when buying my second one, with an increase rate of 7, and I own 1
the price of the second one I expect to be:
Multiplier = 1.07
8000 * 1.07 =
8560
This all works fine. Now I have to implement this in Solidity, which doesn't do decimal math well: integer division rounds down, so 3/2 == 1 in Solidity.
I want to recreate my cost function in Solidity.
function cost(uint _minerIndex, uint _owned) public view returns (uint) {
    uint basePrice = 8000;
    uint increaseRate = 7;
    return basePrice * ((1 + increaseRate / 100) ** _owned);
}
increaseRate / 100 will always return 0 if increaseRate is < 100.
How do I achieve this same effect?
From the documentation:
"Fixed point numbers are not fully supported by Solidity yet. They can be declared, but cannot be assigned to or from."
a simple solution is
(basePrice * ((100 + increaseRate) ** _owned)) / (100 ** _owned)
but it may also fail because of arithmetic overflow, depending on your numbers and the maximum integer size Solidity supports.
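To see the idea outside Solidity, here is a small C sketch (my own illustration, with hypothetical names) of the same scaled-integer computation; the growing numerator is exactly where the overflow risk comes from:
#include <stdint.h>
#include <stdio.h>

/* cost = basePrice * ((100 + rate) / 100)^owned, kept in integers by
   scaling numerator and denominator by 100^owned. */
uint64_t cost(uint64_t basePrice, uint64_t rate, uint64_t owned)
{
    uint64_t num = 1, den = 1;
    for (uint64_t i = 0; i < owned; i++) {
        num *= 100 + rate; /* grows fast: this is the overflow risk */
        den *= 100;
    }
    return basePrice * num / den;
}

int main(void)
{
    printf("%llu\n", (unsigned long long)cost(8000, 7, 0)); /* 8000 */
    printf("%llu\n", (unsigned long long)cost(8000, 7, 1)); /* 8560 */
    return 0;
}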
Is it possible to limit a value in a given range, between min and max, using only arithmetic? That is, + - x / and %?
I am not able to use functions such as min, max nor IF-statements.
Let's assume I have a range of [1850, 1880], for any values < 1850, it should display 1850. For values > 1880, 1880 should be displayed. It would also be acceptable if only 1850 was displayed outside the range.
I tried:
x = (((x - xmax) % (xmax - xmin)) + (xmax - xmin)) % (xmax - xmin) + xmin
but it gives different values in the middle of the range for values lower than xmin.
If you know the size of the integer type, you can extract its sign bit (assuming two's complement) using integer division:
// Example in C
int sign_bit(int s)
{
    // cast to unsigned (important)
    unsigned u = (unsigned)s;
    // number of bits in int
    // if your integer size is fixed, this is just a constant
    const unsigned b = sizeof(int) * 8;
    // pow(2, b - 1)
    // again, a constant which can be pre-computed
    // (1u, not 1: shifting a signed 1 into the sign bit is undefined)
    const unsigned p = 1u << (b - 1);
    // use integer division to get top bit
    return (int)(u / p);
}
This returns 1 if s < 0 and 0 otherwise; it can be used to calculate the absolute value:
int abs_arith(int v)
{
    // sign bit
    int b = sign_bit(v);
    // actual sign (+1 / -1)
    int s = 1 - 2 * b;
    // sign(v) * v = abs(v)
    return s * v;
}
The desired function is flat at min below the range, linear inside it, and flat at max above it.
It is useful to first shift the minimum to zero.
The shifted function can then be computed as the sum of two shifted absolute value functions: abs_arith(val) (the "blue" function in the code below) and range - abs_arith(val - range) (the "green" function).
However the resultant sum is scaled by a factor of 2; shifting to zero helps here because we only need to divide by 2, and shift back to the original minimum:
// Example in C
int clamp_minmax(int val, int min, int max)
{
    // range length
    int range = max - min;
    // shift minimum to zero
    val = val - min;
    // blue function
    int blue = abs_arith(val);
    // green function
    int green = range - abs_arith(val - range);
    // add and divide by 2
    val = (blue + green) / 2;
    // shift to original minimum
    return val + min;
}
This solution, although it satisfies the requirements of the problem, is limited to signed integer types (and to languages which allow integer overflow; I'm unsure how this could be overcome in e.g. Java).
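As a quick sanity check (my sketch, not part of the original answer), here are compact versions of the three functions above, evaluated over the example range from the question:
#include <stdio.h>

/* compact versions of sign_bit / abs_arith / clamp_minmax from above */
static int sign_bit(int s)
{
    return (int)((unsigned)s / (1u << (sizeof(int) * 8 - 1)));
}

static int abs_arith(int v) { return (1 - 2 * sign_bit(v)) * v; }

static int clamp_minmax(int val, int min, int max)
{
    int range = max - min;
    val = val - min;
    val = (abs_arith(val) + range - abs_arith(val - range)) / 2;
    return val + min;
}

int main(void)
{
    printf("%d\n", clamp_minmax(1800, 1850, 1880)); /* 1850 */
    printf("%d\n", clamp_minmax(1860, 1850, 1880)); /* 1860 */
    printf("%d\n", clamp_minmax(1900, 1850, 1880)); /* 1880 */
    return 0;
}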
I found this while messing around in... Excel. It only works for strictly positive integers. That is no more restrictive than the answer by meowgoesthedog, because that one also effectively halves the integer space by dividing by two at the end. It doesn't use mod.
//A = 1 if x <= min
//A = 0 if x > min
A = 1-(min-min/x)/min
//B = 0 if x <= max
//B = 1 if x > max
B = (max-max/x)/max
x = A*min + (1-A)*(1-B)*x + B*max
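A small C sketch (mine) of those three formulas, using integer division throughout; note that x must be strictly positive, as stated:
#include <stdio.h>

/* Branchless clamp using only + - * / (integer division), for x > 0. */
static int clamp(int x, int min, int max)
{
    int A = 1 - (min - min / x) / min; /* 1 if x <= min, else 0 */
    int B = (max - max / x) / max;     /* 1 if x > max, else 0 */
    return A * min + (1 - A) * (1 - B) * x + B * max;
}

int main(void)
{
    printf("%d\n", clamp(100, 1850, 1880));  /* 1850 */
    printf("%d\n", clamp(1860, 1850, 1880)); /* 1860 */
    printf("%d\n", clamp(2000, 1850, 1880)); /* 1880 */
    return 0;
}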
I found this solution in Python:
A = -1 # Minimum value
B = +1 # Maximum value
x = min(max(x, A), B)
So I'm trying to animate a fake heartbeat for my Android Wear watchface. I have an API that grabs the heart rate in BPM and the current millisecond; now I'm trying to use an equation to make an image "beat" at the BPM. Here's the pseudocode:
IF ((Millis / HeartRate) % (1000 / HeartRate) <= 1)
    Opacity = 100;
ELSE
    Opacity = 75;
ENDIF
I'm really not sure if I calculated it properly. I don't think the image is flashing at the correct rate. Any help with the math would be appreciated!
A value in BPM is a frequency, rather than a period of time:
b BPM = b beats / 60 s = (b/60) s^-1
The period of the oscillation is
T = 1/f = (60/b) s
If we have a time in milliseconds, then we can work out the modulo vs the period:
remainderInSeconds = CurrentTimeInSeconds % T
= (CurrentTimeInMilliseconds * 1e-3) % T
= (CurrentTimeInMilliseconds * 1e-3) % (60/BeatsPerMinute)
fraction = remainderInSeconds / Period
= [(CurrentTimeInMilliseconds * 1e-3) % T] / T
= (CurrentTimeInMilliseconds * 1e-3 / T) % 1
= (CurrentTimeInMilliseconds * 1e-3 / (60/BeatsPerMinute)) % 1
= (CurrentTimeInMilliseconds * 1e-3 * BeatsPerMinute / 60) % 1
= (CurrentTimeInMilliseconds * BeatsPerMinute / 60e3) % 1
Then you can check whether the fraction is below your threshold; if you want the pulse to last a 20th of the period, then check if fraction < 1/20.
Alternatively just calculate the remainder in seconds, if you want the pulse to last a specific amount of time rather than a portion of the period.
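A minimal C sketch (names mine) of that check, with the pulse lasting a 20th of the period:
#include <math.h>
#include <stdio.h>

/* Opacity pulse: "on" for the first 1/20th of each beat period. */
static int opacity(double millis, double bpm)
{
    double fraction = fmod(millis * bpm / 60e3, 1.0);
    return fraction < 1.0 / 20.0 ? 100 : 75;
}

int main(void)
{
    /* at 60 BPM the period is one second */
    printf("%d\n", opacity(10.0, 60.0));  /* 100: just after a beat starts */
    printf("%d\n", opacity(500.0, 60.0)); /* 75: halfway through a beat */
    return 0;
}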
I managed to put together new code using a different variable from the watch API. This other variable is essentially a number between 0 and 359 which steps up by mere decimals per frame. (The variable is normally used for a smooth-motion second hand.)
I also decided to use a sine wave and RGB shaders instead of opacity. Here is the new code:
Green = 0
Blue = 0
Red = 100 * math.sin(HeartRate * SecondsRotationSmooth / 60)
Using this particular variable isn't ideal, but it at least gives better-looking results. If anyone wants to give a better answer, please do!
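For what it's worth, here is a sketch (mine, not the watch API) of a purely time-based sine pulse; the remap keeps the channel in [0, 100], since sin alone goes negative:
#include <math.h>
#include <stdio.h>

/* Red channel pulsing at f = bpm / 60 beats per second. */
static double pulse_red(double t_ms, double bpm)
{
    const double pi = 3.14159265358979323846;
    double phase = 2.0 * pi * (bpm / 60.0) * (t_ms / 1000.0);
    return 100.0 * (0.5 + 0.5 * sin(phase)); /* remap [-1, 1] to [0, 100] */
}

int main(void)
{
    printf("%.0f\n", pulse_red(250.0, 60.0)); /* 100: peak of a 1 s beat */
    printf("%.0f\n", pulse_red(750.0, 60.0)); /* 0: trough of a 1 s beat */
    return 0;
}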
This should be simple enough but the maths for this eludes me. I am looking to express this in C++, but some pseudocode will happily do, or just the maths really.
The function will be given the number of a container and it will return the number of items in that container. The number of items is based on the container's number and halves at certain thresholds.
The first halving is at number 43,200, and each later halving happens 43,200 containers after the previous one.
It may sound confusing; it looks like the following.
1 to 43200 = 512
43201 to 86400 = 256
86401 to 129600 = 128
129601 to 172800 = 64
172801 to 216000 = 32
216001 to 259200 = 16
and so on.
So if a number up to 43,200 is given the result is 512, and the number 130,000 will return 64. The value can be less than 1, taking up several decimal places.
halvingInterval = 43200
startingInventory = 512

def boxInventory(boxNumber):
    currentInventory = startingInventory
    while boxNumber > halvingInterval:
        currentInventory = currentInventory / 2
        boxNumber -= halvingInterval
    return currentInventory
This code will take the box number. It will keep subtracting the halving interval until you get to the right inventory area, and then return the inventory when it is done.
N = (noitems + 43199) / 43200;       // integer division, i.e. ceil(noitems / 43200)
L2 = log(512) / log(2);              // = 9
answer = exp(log(2) * (1 + L2 - N)); // = 2^(10 - N)
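The same closed form in C (my sketch): count completed halvings with integer division, then scale 512 down; ldexp computes 512 * 2^-k without a loop:
#include <math.h>
#include <stdio.h>

/* Items in container n: 512, halved once per completed block of 43,200. */
static double box_inventory(long n)
{
    long k = (n - 1) / 43200;      /* completed halvings (integer division) */
    return ldexp(512.0, -(int)k);  /* 512 * 2^-k; can drop below 1 */
}

int main(void)
{
    printf("%g\n", box_inventory(43200L));  /* 512 */
    printf("%g\n", box_inventory(130000L)); /* 64 */
    return 0;
}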
Is it possible to divide an unsigned integer by 10 by using pure bit shifts, addition, subtraction and maybe multiply? Using a processor with very limited resources and slow divide.
Editor's note: this is not actually what compilers do, and gives the wrong answer for large positive integers ending with 9, starting with div10(1073741829) = 107374183 not 107374182. It is exact for smaller inputs, though, which may be sufficient for some uses.
Compilers (including MSVC) do use fixed-point multiplicative inverses for constant divisors, but they use a different magic constant and shift on the high-half result to get an exact result for all possible inputs, matching what the C abstract machine requires. See Granlund & Montgomery's paper on the algorithm.
See Why does GCC use multiplication by a strange number in implementing integer division? for examples of the actual x86 asm gcc, clang, MSVC, ICC, and other modern compilers make.
This is a fast approximation that's inexact for large inputs
It's even faster than the exact division via multiply + right-shift that compilers use.
You can use the high half of a multiply result for divisions by small integral constants. Assume a 32-bit machine (code can be adjusted accordingly):
int32_t div10(int32_t dividend)
{
    int64_t invDivisor = 0x1999999A;
    return (int32_t) ((invDivisor * dividend) >> 32);
}
What's going on here is that we're multiplying by a close approximation of 1/10 * 2^32 and then removing the 2^32 factor with the right shift. This approach can be adapted to different divisors and different bit widths.
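A quick check (my addition) of the exactness boundary mentioned in the editor's note above:
#include <stdint.h>
#include <stdio.h>

static int32_t div10(int32_t dividend)
{
    int64_t invDivisor = 0x1999999A;
    return (int32_t)((invDivisor * dividend) >> 32);
}

int main(void)
{
    printf("%d\n", div10(12345));      /* 1234: exact for small inputs */
    printf("%d\n", div10(1073741829)); /* 107374183: exact answer is 107374182 */
    return 0;
}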
This works great for the ia32 architecture, since its IMUL instruction will put the 64-bit product into edx:eax, and the edx value will be the wanted value. Viz (assuming dividend is passed in eax and quotient returned in eax)
div10 proc
    mov  edx, 1999999Ah  ; load 1/10 * 2^32
    imul eax             ; edx:eax = dividend * (1/10 * 2^32)
    mov  eax, edx        ; eax = high half, i.e. dividend / 10
    ret
div10 endp
Even on a machine with a slow multiply instruction, this will be faster than a software or even hardware divide.
Though the answers given so far match the actual question, they do not match the title. So here's a solution heavily inspired by Hacker's Delight that really uses only bit shifts.
unsigned divu10(unsigned n) {
    unsigned q, r;
    // q ~= n * 0.8: multiply by binary 0.110011001100... (= 4/5)
    q = (n >> 1) + (n >> 2);
    q = q + (q >> 4);
    q = q + (q >> 8);
    q = q + (q >> 16);
    // q ~= n * 0.8 / 8 = n / 10, possibly one too small
    q = q >> 3;
    // r = n - q * 10
    r = n - (((q << 2) + q) << 1);
    // correct the quotient if the remainder reached 10
    return q + (r > 9);
}
I think that this is the best solution for architectures that lack a multiply instruction.
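A throwaway test (mine, not from Hacker's Delight) comparing it against the compiler's division:
#include <stdio.h>

unsigned divu10(unsigned n) {
    unsigned q, r;
    q = (n >> 1) + (n >> 2);
    q = q + (q >> 4);
    q = q + (q >> 8);
    q = q + (q >> 16);
    q = q >> 3;
    r = n - (((q << 2) + q) << 1);
    return q + (r > 9);
}

int main(void)
{
    // spot-check against the compiler's division, including the boundaries
    unsigned tests[] = { 0u, 9u, 10u, 99u, 100u, 4294967295u };
    for (unsigned i = 0; i < sizeof tests / sizeof tests[0]; i++) {
        unsigned n = tests[i];
        printf("divu10(%u) = %u (%s)\n", n, divu10(n),
               divu10(n) == n / 10 ? "ok" : "MISMATCH");
    }
    return 0;
}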
Of course you can if you can live with some loss in precision. If you know the value range of your input values you can come up with a bit shift and a multiplication which is exact.
Here are some examples of how you can divide by 10, 60, ..., as described in this blog, to format time the fastest way possible.
temp = (ms * 205) >> 11; // 205/2048 is nearly the same as /10
to expand on Alois's answer a bit, here is the suggested y = (x * 205) >> 11 along with a few more multiplies/shifts:
y = (ms * 1) >> 3 // first error 8
y = (ms * 2) >> 4 // 8
y = (ms * 4) >> 5 // 8
y = (ms * 7) >> 6 // 19
y = (ms * 13) >> 7 // 69
y = (ms * 26) >> 8 // 69
y = (ms * 52) >> 9 // 69
y = (ms * 103) >> 10 // 179
y = (ms * 205) >> 11 // 1029
y = (ms * 410) >> 12 // 1029
y = (ms * 820) >> 13 // 1029
y = (ms * 1639) >> 14 // 2739
y = (ms * 3277) >> 15 // 16389
y = (ms * 6554) >> 16 // 16389
y = (ms * 13108) >> 17 // 16389
y = (ms * 26215) >> 18 // 43699
y = (ms * 52429) >> 19 // 262149
y = (ms * 104858) >> 20 // 262149
y = (ms * 209716) >> 21 // 262149
y = (ms * 419431) >> 22 // 699059
y = (ms * 838861) >> 23 // 4194309
y = (ms * 1677722) >> 24 // 4194309
y = (ms * 3355444) >> 25 // 4194309
y = (ms * 6710887) >> 26 // 11184819
y = (ms * 13421773) >> 27 // 67108869
each line is a single, independent calculation, and you'll see your first "error"/incorrect result at the value shown in the comment. you're generally better off taking the smallest shift for a given error value, as this minimises the extra bits needed to store the intermediate value in the calculation, e.g. (x * 13) >> 7 is "better" than (x * 52) >> 9 as it needs two fewer bits of overhead, while both start to give wrong answers above 68.
if you want to calculate more of these, the following (Python) code can be used:
def mul_from_shift(shift):
    mid = 2**shift + 5.
    return int(round(mid / 10.))
and I did the obvious thing for calculating when this approximation starts to go wrong with:
def first_err(mul, shift):
    i = 1
    while True:
        y = (i * mul) >> shift
        if y != i // 10:
            return i
        i += 1
(note that // is used for "integer" division, i.e. it rounds down; for the non-negative values here that's the same as truncating towards zero)
the reason for the "3/1" pattern in errors (i.e. each error value repeating 3 times before jumping) seems to be due to the change in bases, i.e. log2(10) is ~3.32. plotting the relative error, given by mul_from_shift(shift) / (1<<shift) - 0.1, makes this pattern visible.
Considering Kuba Ober’s response, there is another one in the same vein.
It uses iterative approximation of the result, but I wouldn't expect any surprising performance.
Let's say we have to find x where x = v / 10.
We'll use the inverse operation v = x * 10, because it has the nice property that when x = a + b, then x * 10 = a * 10 + b * 10.
Let's use x as the variable holding the best approximation of the result so far. When the search ends, x will hold the result. We'll set each bit b of x from the most significant to the least significant, one by one, and compare (x + b) * 10 with v. If it's smaller than or equal to v, then the bit b is set in x. To test the next bit, we simply shift b one position to the right (divide by two).
We can avoid the multiplication by 10 by holding x * 10 and b * 10 in other variables.
This yields the following algorithm to divide v by 10.
uint16_t x = 0, x10 = 0, b = 0x1000, b10 = 0xA000;

while (b != 0) {
    uint16_t t = x10 + b10;
    if (t <= v) {
        x10 = t;
        x |= b;
    }
    b10 >>= 1;
    b >>= 1;
}
// x = v / 10
Edit: to get the algorithm of Kuba Ober, which avoids the need for the variable x10, we can subtract b10 from v instead. In this case x10 isn't needed anymore. The algorithm becomes
uint16_t x = 0, b = 0x1000, b10 = 0xA000;

while (b != 0) {
    if (b10 <= v) {
        v -= b10;
        x |= b;
    }
    b10 >>= 1;
    b >>= 1;
}
// x = v / 10
// x = v / 10
The loop may be unrolled and the different values of b and b10 may be precomputed as constants.
On architectures that can only shift one place at a time, a series of explicit comparisons against decreasing powers of two multiplied by 10 might work better than the solution from Hacker's Delight. Assuming a 16 bit dividend:
uint16_t div10(uint16_t dividend) {
    uint16_t quotient = 0;
#define div10_step(n) \
    do { if (dividend >= (n * 10)) { quotient += n; dividend -= n * 10; } } while (0)
    div10_step(0x1000);
    div10_step(0x0800);
    div10_step(0x0400);
    div10_step(0x0200);
    div10_step(0x0100);
    div10_step(0x0080);
    div10_step(0x0040);
    div10_step(0x0020);
    div10_step(0x0010);
    div10_step(0x0008);
    div10_step(0x0004);
    div10_step(0x0002);
    div10_step(0x0001);
#undef div10_step
    if (dividend >= 5) ++quotient; // round the result (optional)
    return quotient;
}
Well, division is subtraction, so yes. Shift right by 1 (divide by 2). Now subtract 5 from the result, counting the number of times you do the subtraction until the value is less than 5. The result is the number of subtractions you did. Oh, and dividing is probably going to be faster anyway.
A hybrid strategy of shift right then divide by 5 using the normal division might get you a performance improvement if the logic in the divider doesn't already do this for you.
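A minimal sketch (mine) of the subtract-and-count idea from the first paragraph; it relies on n/10 == (n/2)/5:
unsigned div10_subtract(unsigned n)
{
    unsigned half = n >> 1; /* divide by 2 */
    unsigned q = 0;
    while (half >= 5) {     /* count how many 5s fit into n/2 */
        half -= 5;
        ++q;
    }
    return q;               /* floor(n / 10) */
}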
I've designed a new method in AVR assembly, with lsr/ror and sub/sbc only. It divides by 8, then subtracts the number divided by 64 and 128, then subtracts the 1,024th and the 2,048th, and so on. Works very reliably (includes exact rounding) and quickly (370 microseconds at 1 MHz).
The source code is here for 16-bit-numbers:
http://www.avr-asm-tutorial.net/avr_en/beginner/DIV10/div10_16rd.asm
The page that comments this source code is here:
http://www.avr-asm-tutorial.net/avr_en/beginner/DIV10/DIV10.html
I hope that it helps, even though the question is ten years old.
brgs, gsc
The code from elemakil's comments can be found in Hacker's Delight: https://doc.lagout.org/security/Hackers%20Delight.pdf
page 233, "Unsigned divide by 10 [and 11]."
I was adding a Fraction class to my codebase the other day (the first time, never needed one before and I doubt I do now, but what the hell :-)). When writing the addition between two fractions, I found a small optimization but it doesn't make sense (in the mathematical sense) why it is like it is.
To illustrate I will use fractions A and B, effectively consisting of An, Ad and Bn, Bd for numerator and denominator respectively.
Here are two functions I use for GCD/LCM; the formulas are on Wikipedia as well. They're simple enough to understand. The LCM one could just as well be (A*B)/GCD(A,B), of course.
static unsigned int GreatestCommonDivisor(unsigned int A, unsigned int B)
{
    return (!B) ? A : GreatestCommonDivisor(B, A % B);
}

static unsigned int LeastCommonMultiple(unsigned int A, unsigned int B)
{
    const unsigned int gcDivisor = GreatestCommonDivisor(A, B);
    return (A / gcDivisor) * B;
}
First let's go over the 1st approach:
least_common_mul = least_common_multiple(Ad, Bd)
new_numerator = An * (least_common_mul / Ad) + Bn * (least_common_mul / Bd)
new_denominator = least_common_mul
Voila, works, obvious, done.
Then through some scribbling on my notepad I came across another one that works:
greatest_common_div = greatest_common_divisor(Ad, Bd)
den_quot_a = Ad / greatest_common_div
den_quot_b = Bd / greatest_common_div
new_numerator = An * den_quot_b + Bn * den_quot_a
new_denominator = den_quot_a * Bd
Now the new denominator is fairly obvious, as it's exactly the same computation as in the LCM function. The other ones seem to make sense too, except that the factors to multiply the original numerators with are swapped, in this line to be specific:
new_numerator = An * den_quot_b + Bn * den_quot_a
Why is that not An * den_quot_a + Bn * den_quot_b?
Input example: 5/12 & 11/18
greatest_common_div = 6
den_quot_a = 12/6 = 2;
den_quot_b = 18/6 = 3;
new_numerator = 5*3 + 11*2 = 37;
new_denominator = 36;
It's pretty straightforward: it's what you'd normally do to put fractions over the same denominator - multiply each fraction's numerator and denominator by the factors that the other fraction has in its denominator that aren't present in the first.
2 is the factor of 36 which is missing from 18; 3 is the factor of 36 which is missing from 12. Thus, you multiply:
(5/12) * (3/3) ==> 15/36
(11/18) * (2/2) ==> 22/36
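A small C sketch (mine) of the questioner's second approach, confirming the worked example:
#include <stdio.h>

static unsigned gcd(unsigned a, unsigned b)
{
    return b ? gcd(b, a % b) : a;
}

/* An/Ad + Bn/Bd via the gcd of the denominators. */
static void add_fractions(unsigned An, unsigned Ad, unsigned Bn, unsigned Bd,
                          unsigned *n, unsigned *d)
{
    unsigned g = gcd(Ad, Bd);
    unsigned den_quot_a = Ad / g;
    unsigned den_quot_b = Bd / g;
    *n = An * den_quot_b + Bn * den_quot_a; /* the "swapped" factors */
    *d = den_quot_a * Bd;                   /* == lcm(Ad, Bd) */
}

int main(void)
{
    unsigned n, d;
    add_fractions(5, 12, 11, 18, &n, &d);
    printf("%u/%u\n", n, d); /* 37/36 */
    return 0;
}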
Perhaps you're missing one of the identities of number theory... for any two positive numbers m and n,
m*n = gcd(m,n) * lcm(m,n)
examples:
4*18 = 2 * 36
15*9 = 3 * 45
Finding a common denominator for fractions a/b and c/d involves using lcm(b,d), or equivalently, bd/gcd(b,d).