Calculating the modulo of two intervals

I want to understand how the modulus operator works when applied to two intervals. Adding, subtracting and multiplying two intervals is trivial to implement in code, but how do you do it for modulus?
I'd be happy if someone can show me the formula, sample code or a link which explains how it works.
Background info: You have two integers x and y with x_lo < x < x_hi and y_lo < y < y_hi. What are the lower and upper bounds for mod(x, y)?
Edit: I'm unsure whether it is possible to come up with the minimal bounds in an efficient manner (without calculating the mod for all x or for all y). If that is not possible, I'll accept an accurate but non-optimal answer for the bounds. Obviously, [-inf,+inf] is a correct answer then :) but I want a bound that is tighter.
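For reference, the exact bounds can always be brute-forced (a Python sketch; it assumes the truncated-division modulo used by C, where the result takes the sign of the dividend, and the strict bounds from above):

def tmod(a, m):
    # truncated-division modulo: the sign follows the dividend a
    r = abs(a) % abs(m)
    return r if a >= 0 else -r

def mod_bounds_bruteforce(x_lo, x_hi, y_lo, y_hi):
    vals = [tmod(x, y)
            for x in range(x_lo + 1, x_hi)      # strict bounds, as above
            for y in range(y_lo + 1, y_hi) if y != 0]
    return (min(vals), max(vals))

This is exactly the brute force I would like to avoid, but it is handy for validating a faster implementation.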

It turns out, this is an interesting problem. The assumption I make is that for integer intervals, modulo is defined with respect to truncated division (round towards 0).
As a consequence, mod(-a,m) == -mod(a,m) for all a, m. Moreover, sign(mod(a,m)) == sign(a).
Definitions, before we start
Closed interval from a to b: [a,b]
Empty interval: [] := [+Inf,-Inf]
Negation: -[a,b] := [-b,-a]
Union: [a,b] u [c,d] := [min(a,c),max(b,d)]
Absolute value: |m| := max(m,-m)
Simpler Case: Fixed modulus m
It is easier to start with a fixed m. We will later generalize this to the modulo of two intervals. The definition builds up recursively. It should be no problem to implement this in your favorite programming language. Pseudocode:
def mod1([a,b], m):
    // (1): empty interval
    if a > b || m == 0:
        return []
    // (2): compute modulo with positive interval and negate
    else if b < 0:
        return -mod1([-b,-a], m)
    // (3): split into negative and non-negative interval, compute and join
    else if a < 0:
        return mod1([a,-1], m) u mod1([0,b], m)
    // (4): there is no k > 0 such that a < k*m <= b
    else if b-a < |m| && a % m <= b % m:
        return [a % m, b % m]
    // (5): we can't do better than that
    else
        return [0,|m|-1]
Up to this point, we can't do better than that. The resulting interval in (5) might be an over-approximation, but it is the best we can get. If we were allowed to return a set of intervals, we could be more precise.
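Here is the same function as runnable Python (a minimal sketch; note that Python's % is a floored modulo, but for the non-negative a and b that reach case (4) it agrees with the truncated modulo when taken against |m|):

from math import inf

EMPTY = (inf, -inf)  # the empty interval [+Inf,-Inf]

def neg(iv):
    a, b = iv
    return (-b, -a)

def union(p, q):
    return (min(p[0], q[0]), max(p[1], q[1]))

def mod1(iv, m):
    a, b = iv
    if a > b or m == 0:                # (1) empty interval
        return EMPTY
    if b < 0:                          # (2) negate
        return neg(mod1((-b, -a), m))
    if a < 0:                          # (3) split and join
        return union(mod1((a, -1), m), mod1((0, b), m))
    am, bm = a % abs(m), b % abs(m)    # safe here: a, b >= 0
    if b - a < abs(m) and am <= bm:    # (4) no wrap-around inside [a,b]
        return (am, bm)
    return (0, abs(m) - 1)             # (5) best single interval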
General case
The same ideas apply to the case where our modulus is an interval itself. Here we go:
def mod2([a,b], [m,n]):
    // (1): empty interval
    if a > b || m > n:
        return []
    // (2): compute modulo with positive interval and negate
    else if b < 0:
        return -mod2([-b,-a], [m,n])
    // (3): split into negative and non-negative interval, compute, and join
    else if a < 0:
        return mod2([a,-1], [m,n]) u mod2([0,b], [m,n])
    // (4): use the simpler function from before
    else if m == n:
        return mod1([a,b], m)
    // (5): use only non-negative m and n
    else if n <= 0:
        return mod2([a,b], [-n,-m])
    // (6): similar to (5), make modulus non-negative
    else if m <= 0:
        return mod2([a,b], [1, max(-m,n)])
    // (7): compare to (4) in mod1, check b-a < |modulus|
    else if b-a >= n:
        return [0,n-1]
    // (8): similar to (7), split interval, compute, and join
    else if b-a >= m:
        return [0, b-a-1] u mod2([a,b], [b-a+1,n])
    // (9): modulo has no effect
    else if m > b:
        return [a,b]
    // (10): there is some overlap of [a,b] and [m,n]
    else if n > b:
        return [0,b]
    // (11): either compute all possibilities and join, or be imprecise
    else:
        return [0,n-1] // imprecise
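A matching Python sketch, reusing EMPTY, neg, union and mod1 from the translation above:

def mod2(iv, miv):
    a, b = iv
    m, n = miv
    if a > b or m > n:                    # (1) empty interval
        return EMPTY
    if b < 0:                             # (2) negate
        return neg(mod2((-b, -a), (m, n)))
    if a < 0:                             # (3) split and join
        return union(mod2((a, -1), (m, n)), mod2((0, b), (m, n)))
    if m == n:                            # (4) fixed modulus
        return mod1((a, b), m)
    if n <= 0:                            # (5) make the modulus non-negative
        return mod2((a, b), (-n, -m))
    if m <= 0:                            # (6) modulus straddles zero
        return mod2((a, b), (1, max(-m, n)))
    if b - a >= n:                        # (7) every modulus wraps
        return (0, n - 1)
    if b - a >= m:                        # (8) split on the modulus size
        return union((0, b - a - 1), mod2((a, b), (b - a + 1, n)))
    if m > b:                             # (9) modulo has no effect
        return (a, b)
    if n > b:                             # (10) partial overlap
        return (0, b)
    return (0, n - 1)                     # (11) imprecise fallback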
Have fun! :)

Let mod = mod(x, y), and assume x and y are positive.
In general 0 <= mod < y, so it is always true that 0 <= mod < y_hi.
But we can tighten the bounds in some specific cases:
- if: x_hi < y_lo then div(x, y) = 0 and mod = x, so x_lo < mod < x_hi
- if: x_lo > y_hi then div(x, y) > 0, then 0 <= mod < y_hi
- if: x_lo < y_lo < y_hi < x_hi, then 0 <= mod < y_hi
- if: x_lo < y_lo < x_hi < y_hi, then 0 <= mod < x_hi
- if: y_lo < x_lo < y_hi < x_hi, then 0 <= mod < y_hi
....

Related

cvxpy contrained normalization equations (abs)

I am working on an optimization problem (A*v = b) where I would like to rank a set of alternatives X = {x1,x2,x3,x4}. However, I have the following normalization constraint: |v[i] - v[j]| <= 1, which can be written in the form -1 <= v[i] - v[j] <= 1.
My code is as follows:
import cvxpy as cp

n = len(X)  # set of alternatives
v = cp.Variable(n)
objective = cp.Minimize(cp.sum_squares(A*v - b))
constraints = [0 <= v]
# Normalization condition -1 <= v[i] - v[j] <= 1
for i in range(n):
    for j in range(n):
        constraints = [-1 <= v[i]-v[j], 1 >= v[i]-v[j]]
prob = cp.Problem(objective, constraints)
# The optimal objective value is returned by `prob.solve()`.
result = prob.solve()
# The optimal value for v is stored in `v.value`.
va2 = v.value
Which outputs:
[-0.15 0.45 -0.35 0.05]
This result is not close to what it should be and even has negative values. I think my code for the normalization constraint is most probably wrong.
You are not appending your constraints; instead you are overwriting them on each iteration. Instead of this line
constraints = [-1 <= v[i]-v[j], 1 >= v[i]-v[j]]
You should have
constraints += [-1 <= v[i]-v[j], 1 >= v[i]-v[j]]
For cleanliness you may want to change this
for i in range(n):
for j in range(n):
To only consider each pair once:
for i in range(n):
for j in range(i+1, n):
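Putting both changes together, the constraint construction becomes:

constraints = [0 <= v]
# Normalization condition -1 <= v[i] - v[j] <= 1, each pair once
for i in range(n):
    for j in range(i+1, n):
        constraints += [-1 <= v[i]-v[j], 1 >= v[i]-v[j]]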

Implementing FFT over finite fields

I would like to implement multiplication of polynomials using NTT. I followed Number-theoretic transform (integer DFT) and it seems to work.
Now I would like to implement multiplication of polynomials over finite fields Z_p[x] where p is arbitrary prime number.
Does it change anything that the coefficients are now bounded by p, compared to the former unbounded case?
In particular, the original NTT required finding a prime number N as the working modulus that is larger than (magnitude of largest element of the input vector)^2 * (length of the input vector) + 1, so that the result never overflows. If the result is going to be bounded by that prime p anyway, how small can the modulus be? Note that p - 1 does not have to be of the form (some positive integer) * (length of the input vector).
Edit: I copy-pasted the source from the link above to illustrate the problem:
#
# Number-theoretic transform library (Python 2, 3)
#
# Copyright (c) 2017 Project Nayuki
# All rights reserved. Contact Nayuki for licensing.
# https://www.nayuki.io/page/number-theoretic-transform-integer-dft
#
import itertools, numbers

def find_params_and_transform(invec, minmod):
    check_int(minmod)
    mod = find_modulus(len(invec), minmod)
    root = find_primitive_root(len(invec), mod - 1, mod)
    return (transform(invec, root, mod), root, mod)

def check_int(n):
    if not isinstance(n, numbers.Integral):
        raise TypeError()

def find_modulus(veclen, minimum):
    check_int(veclen)
    check_int(minimum)
    if veclen < 1 or minimum < 1:
        raise ValueError()
    start = (minimum - 1 + veclen - 1) // veclen
    for i in itertools.count(max(start, 1)):
        n = i * veclen + 1
        assert n >= minimum
        if is_prime(n):
            return n

def is_prime(n):
    check_int(n)
    if n <= 1:
        raise ValueError()
    return all((n % i != 0) for i in range(2, sqrt(n) + 1))

def sqrt(n):
    check_int(n)
    if n < 0:
        raise ValueError()
    i = 1
    while i * i <= n:
        i *= 2
    result = 0
    while i > 0:
        if (result + i)**2 <= n:
            result += i
        i //= 2
    return result

def find_primitive_root(degree, totient, mod):
    check_int(degree)
    check_int(totient)
    check_int(mod)
    if not (1 <= degree <= totient < mod):
        raise ValueError()
    if totient % degree != 0:
        raise ValueError()
    gen = find_generator(totient, mod)
    root = pow(gen, totient // degree, mod)
    assert 0 <= root < mod
    return root

def find_generator(totient, mod):
    check_int(totient)
    check_int(mod)
    if not (1 <= totient < mod):
        raise ValueError()
    for i in range(1, mod):
        if is_generator(i, totient, mod):
            return i
    raise ValueError("No generator exists")

def is_generator(val, totient, mod):
    check_int(val)
    check_int(totient)
    check_int(mod)
    if not (0 <= val < mod):
        raise ValueError()
    if not (1 <= totient < mod):
        raise ValueError()
    pf = unique_prime_factors(totient)
    return pow(val, totient, mod) == 1 and all((pow(val, totient // p, mod) != 1) for p in pf)

def unique_prime_factors(n):
    check_int(n)
    if n < 1:
        raise ValueError()
    result = []
    i = 2
    end = sqrt(n)
    while i <= end:
        if n % i == 0:
            n //= i
            result.append(i)
            while n % i == 0:
                n //= i
            end = sqrt(n)
        i += 1
    if n > 1:
        result.append(n)
    return result

def transform(invec, root, mod):
    check_int(root)
    check_int(mod)
    if len(invec) >= mod:
        raise ValueError()
    if not all((0 <= val < mod) for val in invec):
        raise ValueError()
    if not (1 <= root < mod):
        raise ValueError()
    outvec = []
    for i in range(len(invec)):
        temp = 0
        for (j, val) in enumerate(invec):
            temp += val * pow(root, i * j, mod)
            temp %= mod
        outvec.append(temp)
    return outvec

def inverse_transform(invec, root, mod):
    outvec = transform(invec, reciprocal(root, mod), mod)
    scaler = reciprocal(len(invec), mod)
    return [(val * scaler % mod) for val in outvec]

def reciprocal(n, mod):
    check_int(n)
    check_int(mod)
    if not (0 <= n < mod):
        raise ValueError()
    x, y = mod, n
    a, b = 0, 1
    while y != 0:
        a, b = b, a - x // y * b
        x, y = y, x % y
    if x == 1:
        return a % mod
    else:
        raise ValueError("Reciprocal does not exist")

def circular_convolve(vec0, vec1):
    if not (0 < len(vec0) == len(vec1)):
        raise ValueError()
    if any((val < 0) for val in itertools.chain(vec0, vec1)):
        raise ValueError()
    maxval = max(val for val in itertools.chain(vec0, vec1))
    minmod = maxval**2 * len(vec0) + 1
    temp0, root, mod = find_params_and_transform(vec0, minmod)
    temp1 = transform(vec1, root, mod)
    temp2 = [(x * y % mod) for (x, y) in zip(temp0, temp1)]
    return inverse_transform(temp2, root, mod)

vec0 = [24, 12, 28, 8, 0, 0, 0, 0]
vec1 = [4, 26, 29, 23, 0, 0, 0, 0]
print(circular_convolve(vec0, vec1))

def modulo(vec, prime):
    return [x % prime for x in vec]

print(modulo(circular_convolve(vec0, vec1), 31))
Prints:
[96, 672, 1120, 1660, 1296, 876, 184, 0]
[3, 21, 4, 17, 25, 8, 29, 0]
However, when I change minmod = maxval**2 * len(vec0) + 1 to minmod = maxval + 1, it stops working:
[14, 16, 13, 20, 25, 15, 20, 0]
[14, 16, 13, 20, 25, 15, 20, 0]
What is the smallest minmod (N in the link above) that makes this work as expected?
If your input of n integers is bounded by some prime q (any modulus q, not just a prime, behaves the same), you can use it as the max value + 1, but beware: you cannot use q as the prime p for the NTT, because the NTT prime p must have special properties. All of them are described here:
Translation from Complex-FFT to Finite-Field-FFT
So the max value of each input is q-1, but during the computation (convolution of 2 NTT results) the magnitude of the first-layer results can rise up to n*(q-1), and since we are doing a convolution on them, the input magnitude of the final iNTT can rise up to:
m = n * ((q-1)^2)
If you are doing different operations on the NTTs, then the m equation might change.
Now let us get back to p. In a nutshell, you can use any prime p that upholds all of these:
p mod n == 1
p > m
and there exist 1 <= r,L < p such that:
p mod (L-1) = 0
r^(L*i) mod p == 1 // i = { 0,n }
r^(L*i) mod p != 1 // i = { 1,2,3, ... n-1 }
If all this is satisfied, then r is an nth root of unity modulo p and p can be used for the NTT. To find such a prime, and also r and L, look at the link above (there is C++ code that finds them).
For example, during string multiplication we take 2 strings, do an NTT on each, convolve the results, and inverse-NTT the result back (its size is the sum of both input sizes). So for example:
99999999999999999999999999999999
*99999999999999999999999999999999
----------------------------------------------------------------
9999999999999999999999999999999800000000000000000000000000000001
Here q = 10 and both operands consist of 32 nines, so n = 32, hence m = 9*9*32 = 2592, and the found prime is p = 2689. As you can see, the result matches, so no overflow occurred. However, if I use any smaller prime that still fits all the other conditions, the result will not match. I chose this example specifically to stretch the NTT values as much as possible (all values are q-1 and the sizes are equal to the same power of 2).
In case your NTT is fast and n is not a power of 2, you need to zero-pad each NTT to the nearest higher or equal power-of-2 size. That should not affect the m value, as zero padding should not increase the magnitude of the values. My testing confirms this, so for convolution you can use:
m = (n1+n2) * ((q-1)^2) / 2
where n1, n2 are the raw input sizes before zero padding.
For more info about implementing NTT you can check out mine in C++ (extensively optimized):
Modular arithmetics and NTT (finite field DFT) optimizations
So to answer your questions:
Yes, you can take advantage of the fact that the input is mod q, but you cannot use q as p!
You can use minmod = n * (maxval + 1) only for a single NTT (or the first layer of NTTs); as you are chaining them with a convolution, you cannot use that for the final iNTT stage!
However, as I mentioned in the comments, the easiest option is to use the maximum possible p that fits in the data type you are using and is usable for all supported power-of-2 input sizes.
That basically renders the question irrelevant. The only case I can think of where this is not possible or desired is arbitrary-precision arithmetic, where there is "no" max limit. Many performance issues are bound to a variable p, as the search for p is really slow (possibly even slower than the NTT itself), and a variable p also disables many optimizations of the modular arithmetic, making the NTT really slow.
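For illustration, here is a minimal sketch that drives the library pasted in the question (my example; it assumes the inputs are already reduced mod q = 31 and applies the bound m = n*(q-1)^2 from above):

q = 31
vec0 = [x % q for x in [24, 12, 28, 8, 0, 0, 0, 0]]
vec1 = [x % q for x in [4, 26, 29, 23, 0, 0, 0, 0]]
n = len(vec0)
minmod = n * (q - 1)**2 + 1  # working modulus bounded by n*(q-1)^2, not by q itself
temp0, root, mod = find_params_and_transform(vec0, minmod)
temp1 = transform(vec1, root, mod)
temp2 = [(x * y % mod) for (x, y) in zip(temp0, temp1)]
print(modulo(inverse_transform(temp2, root, mod), q))
# prints [3, 21, 4, 17, 25, 8, 29, 0], matching the question's output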

Is it safe to replace "a/(b*c)" with "a/b/c" when using integer-division?

Is it safe to replace a/(b*c) with a/b/c when using integer division on positive integers a, b, c, or am I at risk of losing information?
I did some random tests and couldn't find an example of a/(b*c) != a/b/c, so I'm pretty sure it's safe but not quite sure how to prove it.
Thank you.
Mathematics
As mathematical expressions, ⌊a/(bc)⌋ and ⌊⌊a/b⌋/c⌋ are equivalent whenever b is nonzero and c is a positive integer (and in particular for positive integers a, b, c). The standard reference for these sorts of things is the delightful book Concrete Mathematics: A Foundation for Computer Science by Graham, Knuth and Patashnik. In it, Chapter 3 is mostly on floors and ceilings, and this is proved on page 71 as a part of a far more general result:
The result (equation 3.10 there) states: if f(x) is a continuous, monotonically increasing function with the property that whenever f(x) is an integer, x is an integer too, then ⌊f(x)⌋ = ⌊f(⌊x⌋)⌋ (and similarly for ceilings). In this result, you can define x = a/b (mathematical, i.e. real division), and f(x) = x/c (exact division again), and plug those into ⌊f(x)⌋ = ⌊f(⌊x⌋)⌋ (after verifying that the conditions on f hold here) to get ⌊a/(bc)⌋ on the LHS equal to ⌊⌊a/b⌋/c⌋ on the RHS.
If we don't want to rely on a reference in a book, we can prove ⌊a/(bc)⌋ = ⌊⌊a/b⌋/c⌋ directly using their methods. Note that with x = a/b (the real number), what we're trying to prove is that ⌊x/c⌋ = ⌊⌊x⌋/c⌋. So:
if x is an integer, then there is nothing to prove, as x = ⌊x⌋.
Otherwise, ⌊x⌋ < x, so ⌊x⌋/c < x/c which means that ⌊⌊x⌋/c⌋ ≤ ⌊x/c⌋. (We want to show it's equal.) Suppose, for the sake of contradiction, that ⌊⌊x⌋/c⌋ < ⌊x/c⌋ then there must be a number y such that ⌊x⌋ < y ≤ x and y/c = ⌊x/c⌋. (As we increase a number from ⌊x⌋ to x and consider division by c, somewhere we must hit the exact value ⌊x/c⌋.) But this means that y = c*⌊x/c⌋ is an integer between ⌊x⌋ and x, which is a contradiction!
This proves the result.
Programming
#include <stdio.h>

int main() {
    unsigned int a = 142857;
    unsigned int b = 65537;
    unsigned int c = 65537;
    printf("a/(b*c) = %u\n", a/(b*c));
    printf("a/b/c = %u\n", a/b/c);
    return 0;
}
prints (with 32-bit integers),
a/(b*c) = 1
a/b/c = 0
(I used unsigned integers as overflow behaviour for them is well-defined, so the above output is guaranteed. With signed integers, overflow is undefined behaviour, so the program can in fact print (or do) anything, which only reinforces the point that the results can be different.)
But if you don't have overflow, then the values you get in your program are equal to their mathematical values (that is, a/(b*c) in your code is equal to the mathematical value ⌊a/(bc)⌋, and a/b/c in code is equal to the mathematical value ⌊⌊a/b⌋/c⌋), which we've proved are equal. So it is safe to replace a/(b*c) in code by a/b/c when b*c is small enough not to overflow.
While b*c could overflow (in C) for the original computation, a/b/c can't overflow, so we don't need to worry about overflow for the forward replacement a/(b*c) -> a/b/c. We would need to worry about it the other way around, though.
Let x = a/b/c. Then a/b == x*c + y for some 0 <= y < c, and a == (x*c + y)*b + z for some 0 <= z < b.
Thus, a == x*b*c + y*b + z. Now y*b + z is at most b*c - 1, so x*b*c <= a < (x+1)*b*c, and therefore a/(b*c) == x.
Thus, a/b/c == a/(b*c), and replacing a/(b*c) by a/b/c is safe.
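A quick brute-force sanity check of the identity (a Python sketch; // is floor division, which coincides with truncated division for non-negative operands):

for a in range(1000):
    for b in range(1, 30):
        for c in range(1, 30):
            assert a // (b * c) == a // b // c
print("ok")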
Nested floor division can be reordered as long as you keep track of your divisors and dividends.
#python3.x
x // m // n = x // (m * n)
#python2.x
x / m / n = x / (m * n)
Proof (sucks without LaTeX :( ) in python3.x:
Let k = x // m
then k <= x / m < k + 1
so k / n <= x / (m * n) < (k + 1) / n
Since floor is monotone and k <= x / m, we get
(x // m) // n = k // n <= x // (m * n)
Now suppose k // n < x // (m * n), and let t = x // (m * n)
then t >= k // n + 1 > k / n, so t * n > k, and hence t * n >= k + 1 (both are integers)
so t >= (k + 1) / n > x / (m * n)
but t = x // (m * n) <= x / (m * n), a contradiction
so k // n = x // (m * n)
and therefore (x // m) // n = x // (m * n)
https://en.wikipedia.org/wiki/Floor_and_ceiling_functions#Nested_divisions

Math Problem: Scale a graph so that it matches another

I have 2 tables of values and want to scale the first one so that it matches the 2nd one as well as possible. Both have the same length. If both are drawn as graphs in a diagram, they should be as close to each other as possible. But I want simple linear weights, not quadratic ones.
My problem is that I have no idea how to actually compute the best scaling factor because of the Abs function.
Some pseudocode:
//given:
float[] table1 = ...;
float[] table2 = ...;

//wanted:
float factor = ???; // I have no idea how to compute this

float remainingDifference = 0;
for (int i = 0; i < length; i++)
{
    float scaledValue = table1[i] * factor;
    //Sum up the differences. I use the Abs function because negative differences are differences too.
    remainingDifference += Abs(scaledValue - table2[i]);
}
I want to compute the scaling factor so that the remainingDifference is minimal.
Simple linear weights are hard, like you said.
a_n = first sequence
b_n = second sequence
c = scaling factor
Your residual function is (sums are from i=1 to N, the number of points):
SUM( |a_i - c*b_i| )
Taking the derivative with respect to c yields:
d/dc SUM( |a_i - c*b_i| )
= SUM( -b_i * (a_i - c*b_i)/|a_i - c*b_i| )
= SUM( -b_i * sign(a_i - c*b_i) )
Setting to 0 and solving for c is hard. I don't think there's an analytic way of doing that. You may want to try https://math.stackexchange.com/ to see if they have any bright ideas.
However if you work with quadratic weights, it becomes significantly simpler:
d/dc SUM( (a_i - c*b_i)^2 )
= SUM( -2*b_i*(a_i - c*b_i) ) = 0
=> SUM( a_i*b_i ) - c*SUM( b_i^2 ) = 0
=> c = SUM( a_i*b_i ) / SUM( b_i^2 )
I strongly suggest the latter approach if you can.
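As a small Python sketch of that closed form (names are illustrative):

def scale_factor_l2(a, b):
    # least-squares factor c minimizing sum((a_i - c*b_i)^2)
    return sum(ai * bi for ai, bi in zip(a, b)) / sum(bi * bi for bi in b)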
I would suggest trying some sort of variant on Newton-Raphson.
Construct a function Diff(k) that measures the difference in area between your two graphs between fixed markers A and B; mathematically I guess it would be integral ( x = A to B ){ |f(x) - k * g(x)| } dx.
Realistically you could just subtract the values: if you range from X = -10 to 10 and have a data point for f(i) and g(i) on each integer i in [-10, 10] (i.e. 21 data points), then you just take sum( i = -10 to 10 ){ |f(i) - k * g(i)| }.
Basically you would expect this function to look like a parabola: there will be an optimum k, deviating slightly from it in either direction will increase the overall area difference, and the bigger the deviation, the bigger the gap. So this should be a pretty smooth function (if you have a lot of data points).
You want to minimise Diff(k), i.e. find where its derivative D'(k) = d/dk Diff(k) is 0, so just do Newton-Raphson on the new function D'(k).
Kick it off at k = 1 and it should zone in on a solution pretty fast. That's probably going to give you an optimal computation time.
If you want something simpler, just start with some k1 and k2 whose D' values lie on either side of 0. Say D'(1.5) = -3 and D'(2.9) = 7; then you would pick a k that is 3/10 of the way (10 = 7 - (-3)) between 1.5 and 2.9, and depending on whether that yields a positive or negative value, use it as the new k1 or k2. Rinse and repeat.
In case anyone stumbles upon this in the future, here is some code (C++).
The trick is to first sort the samples by the scaling factor that would make each individual pair fit exactly. Then start at both ends and iterate towards the factor that yields the minimum absolute deviation (L1 norm).
Everything except the sort has linear run time => the total runtime is O(n*log n).
#include <algorithm>
#include <cmath>
#include <memory>

/*
 * Find x so that the sum over std::abs(pA[i]-pB[i]*x) from i=0 to (n-1) is minimal
 * Then return x
 */
float linearFit(const float* pA, const float* pB, int n)
{
    /*
     * An algebraic solution is not possible for the general case
     * => iterative algorithm
     */
    if (n < 0)
        throw "linearFit has invalid argument: expected n >= 0";
    if (n == 0)
        return 0; //If there is nothing to fit, any factor is a perfect fit (the sum is always 0)
    if (n == 1)
        return pA[0] / pB[0]; //return x so that pA[0] = pB[0]*x

    //The checks above guarantee n > 1, so these allocations are safe.
    //If you don't like this, use a std::vector :P
    std::unique_ptr<float[]> targetValues_(new float[n]);
    std::unique_ptr<int[]> indices_(new int[n]);

    //Get proper pointers:
    float* targetValues = targetValues_.get(); //The value for x that would cause pA[i] = pB[i]*x
    int* indices = indices_.get(); //Indices of useful (not nan and not infinity) target values

    int m = 0; //Number of useful target values
    for (int i = 0; i < n; i++)
    {
        float a = pA[i];
        float b = pB[i];
        float targetValue = a / b;
        targetValues[i] = targetValue;
        if (std::isfinite(targetValue))
        {
            indices[m++] = i;
        }
    }
    if (m <= 0)
        return 0;
    if (m == 1)
        return targetValues[indices[0]]; //If there is only one target value, it has to be the best one.

    //sort the indices by target value
    std::sort(indices, indices + m, [&](int ia, int ib){
        return targetValues[ia] < targetValues[ib];
    });

    //Start from the extremes and meet at the optimal solution somewhere in the middle:
    int l = 0;
    int r = m - 1;
    // m >= 2 is guaranteed => l < r
    float penaltyFactorL = std::abs(pB[indices[l]]);
    float penaltyFactorR = std::abs(pB[indices[r]]);
    while (l < r)
    {
        if (l == r - 1 && penaltyFactorL == penaltyFactorR)
        {
            break;
        }
        if (penaltyFactorL < penaltyFactorR)
        {
            l++;
            if (l < r)
            {
                penaltyFactorL += std::abs(pB[indices[l]]);
            }
        }
        else
        {
            r--;
            if (l < r)
            {
                penaltyFactorR += std::abs(pB[indices[r]]);
            }
        }
    }

    //return the best target value
    if (l == r)
        return targetValues[indices[l]];
    else
        return (targetValues[indices[l]] + targetValues[indices[r]]) * 0.5f;
}
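For comparison, the same idea fits in a few lines of Python (a sketch, not a translation of the C++ above): the L1-optimal factor is a weighted median of the per-sample ratios pA[i]/pB[i], weighted by |pB[i]|, because |pA[i] - x*pB[i]| == |pB[i]| * |pA[i]/pB[i] - x| whenever pB[i] != 0:

def linear_fit_l1(a, b):
    pairs = sorted((ai / bi, abs(bi)) for ai, bi in zip(a, b) if bi != 0)
    total = sum(w for _, w in pairs)
    acc = 0.0
    for t, w in pairs:           # walk up to the weighted median
        acc += w
        if 2 * acc >= total:
            return t
    return 0.0                   # nothing useful to fit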

Math Mod Containing Numbers

I would like to write a simple line of code, without resorting to if statements, that evaluates whether a number is within a certain range. I can evaluate the range 0 to max by using the modulus.
30 % 90 = 30 //great
However, if the test number is greater than the maximum, the modulus will simply wrap it around to start from 0 again, whereas I would like to limit it to the maximum if it is past the maximum:
94 % 90 = 4 //i would like the answer to be 90
It becomes even more complicated, to me anyway, if I introduce a minimum for the range. For example:
minimum = 10
maximum = 90
Therefore, any number I evaluate should be either within the range, the minimum value if it is below the range, or the maximum value if it is above the range:
-76 should be 10
2 should be 10
30 should be 30
89 should be 89
98 should be 90
23553 should be 90
Is it possible to evaluate this with one line of code, without using if statements?
Probably the simplest way is to use whatever max and min are available in your language like this:
max(10, min(number, 90))
In some languages, e.g. Java, JavaScript, and C# (and probably others) max and min are static methods of the Math class.
I've used a clip function to make it easier (this is in JavaScript):
function clip(min, number, max) {
    return Math.max(min, Math.min(number, max));
}
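Checked against the examples from the question (a quick Python sketch):

def clamp(number, lo=10, hi=90):
    return max(lo, min(number, hi))

print([clamp(n) for n in (-76, 2, 30, 89, 98, 23553)])
# prints [10, 10, 30, 89, 90, 90]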
simple, but still branches even though if is not used:
r = ( x < minimum ) ? minimum : ( x > maximum ) ? maximum : x;
from bit twiddling hacks, assuming (2<3) == 1:
r = y ^ ((x ^ y) & -(x < y)); // min(x, y)
r = x ^ ((x ^ y) & -(x < y)); // max(x, y)
putting it together, assuming min < max (split into two steps for readability):
m = max ^ ((x ^ max) & -(x < max)); // min(x, max)
r = min ^ ((min ^ m) & -(min < m)); // max(min, m)
how it works when x<y:
r = y ^ ((x ^ y) & -(x < y));
r = y ^ ((x ^ y) & -(1)); // x<y == 1
r = y ^ ((x ^ y) & ~0); // -1 == ~0
r = y ^ (x ^ y); // (x^y) & ~0 == (x^y)
r = y ^ x ^ y; // y^y == 0
r = x;
otherwise:
r = y ^ ((x ^ y) & -(x < y));
r = y ^ ((x ^ y) & -(0)); // x<y == 0
r = y ^ ((x ^ y) & 0); // -0 == 0
r = y; // (x^y) & 0 == 0
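The same trick happens to work with Python ints, since a comparison converts to 0 or 1 and -1 acts as an all-ones mask under &; a quick sketch:

def bt_min(x, y):
    return y ^ ((x ^ y) & -(x < y))

def bt_max(x, y):
    return x ^ ((x ^ y) & -(x < y))

def bt_clamp(x, lo, hi):
    return bt_max(lo, bt_min(x, hi))

print([bt_clamp(n, 10, 90) for n in (-76, 2, 30, 89, 98, 23553)])
# prints [10, 10, 30, 89, 90, 90]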
If you are using a language that has a ternary operator (such as C or Java), you could do it like this:
t < lo ? lo : (t > hi ? hi : t)
where t is the test variable, and lo and hi are the limits. That satisfies your constraints, in that it doesn't strictly use if-statements, but the ternary operator is really just syntactic sugar for an if-statement.
Using C/C++:
value = min*(number < min) +
        max*(number > max) +
        (number >= min && number <= max)*number;
The following is a brief explanation. The code depends on the fact that in C/C++ a boolean expression converts to an integer (1 for true, 0 for false), so exactly one of the three terms survives. Basically:
if number < min then:
    value = min*1 + max*0 + 0*number;
else if number > max:
    value = min*0 + max*1 + 0*number;
else:
    value = min*0 + max*0 + 1*number;
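Python's comparisons also evaluate to 0 or 1 under arithmetic, so the expression carries over almost verbatim (a quick sketch):

lo, hi = 10, 90
number = 23553
value = lo*(number < lo) + hi*(number > hi) + (lo <= number <= hi)*number
print(value)  # prints 90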
I don't see how you could...
(X / 10) < 1 ? 10 : (X / 90 >= 1 ? 90 : X)
Number divided by 10 is less than 1? Set to 10.
Else
If number divided by 90 is greater than or equal to 1, set to 90.
Else
Set to X.
Note that it's still hidden ifs. :(
