Any alternative to the given if-else statement - math

I have a very simple requirement. Currently in my program I am doing something like this:
int low = getLowValue();
if (low == 20)
    low = 15;
else
    low = (low / 5) * 5; // here '/' is for integer division
Is there any simple way to do this? Any one-line statement which fulfills the above condition (not the ternary operator)?
Thanks in advance.

Actually, a ternary operation is usually ideal for this sort of thing.
But, since you state you don't want that, you can also do it in "one"(a) line by providing your own function to do the heavy lifting:
int getLowValueWithPreCheck(int checkFrom, int checkTo) {
    int val = getLowValue();
    if (val == checkFrom) return checkTo;
    return (val / 5) * 5;
}
int low = getLowValueWithPreCheck(20, 15);
(a) Quoted since it refers to one line at the point of call - this is usually a good thing to do if you're going to be doing the operation in many places and want to minimise the code clutter.

Your code is readable and I would not change it. But if you really want, you can do this (in C++ at least):
low = (low - (low == 20)) / 5 * 5;
If low == 20 this results in (20 - 1) / 5 * 5 which evaluates to 15.
If low != 20 this results in low / 5 * 5.

The following snippet uses Java syntax to solve the problem:
low = (low == 20) ? 15 : (low / 5) * 5;


Understanding Recursion with Merge sort

I have looked at some posts to understand merge sort. I know recursive methods maintain a stack to hold values (my understanding was that the result of the return statement will be on the stack).
private int recur(int count) {
    if (count > 0) {
        System.out.println(count);
        return count + recur(--count); // this value will be in stack.
    }
    return count;
}
What confuses me is how the stack is maintained in this merge sort:
private void divide(int low, int high) {
    System.out.println("Divide => Low: " + low + " High: " + high);
    if (low < high) {
        int middle = (low + high) / 2;
        divide(low, middle); // {0,7},{0,3}, {0,1} ;
        divide(middle + 1, high); // {0,0}; high = 1; // 2nd divide
        combine(low, middle, high);
    }
}
Is the stack used for all local variables?
When the 2nd recursive call is made, does the 1st one stay on the stack too?
How is the stack maintained in such cases?
You only have to know that a statement needs to finish before the next one runs, and that calling divide or combine from within divide works the same way. Both calls need to finish before the next line of code can be executed or, if there are no more lines, the function returns. Yes, it's done with a stack, but that is really not important.
The state of the local variables low, high and middle belongs only to the current invocation's bindings, so they don't get mixed up with other invocations.
Every time you nest a new call it gets its own variables, and each call needs to finish. When the low-middle call is finished it calls middle+1-high, and when that is finished, combine. Those calls will do the same, so you will have deeper nesting, and the call structure is visited like a binary tree, with the leaves being low == high (one element).
A word of advice: when looking at recursive code, try working from the leaves toward the more complex tree, i.e. try it out with the base case first, then the simplest default case, e.g.:
1-element array: does nothing
2-element array: -> 1-element array (see 1.), 1-element array, combine
4-element array: -> 2-element array (see 2.), 2-element array, combine
Notice that in 2. you know both recursive calls won't do anything and combine will perhaps do a swap. 3. does 2. twice (including the swap) before a combine that merges two 2-element arrays that are already sorted. You are perhaps looking at it the other way, which requires you to halt 3. to do 2., which halts again to do 1., then the next 1., then goes back to 2. to do the step that combines the two 1s... That needs pen and paper. Looking at it from leaf to root, using what you have learned of it so far, lets you understand it much more easily. I do think functional recursion is easier to grasp than mutating structures like your merge sort, e.g. the Fibonacci sequence.
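To see that nesting order concretely, here is a minimal, self-contained Java sketch (the class name and the 4-element array size are just illustrative; combine is replaced by a print) showing the order in which the calls run:
public class DivideTrace {
    public static void main(String[] args) {
        divide(0, 3); // indices of a hypothetical 4-element array
    }
    private static void divide(int low, int high) {
        System.out.println("Divide  => Low: " + low + " High: " + high);
        if (low < high) {
            int middle = (low + high) / 2;
            divide(low, middle);      // must finish completely first
            divide(middle + 1, high); // only then does this call start
            // stand-in for combine(low, middle, high)
            System.out.println("Combine => Low: " + low + " Middle: " + middle + " High: " + high);
        }
    }
}
Running it prints the Divide lines going down the left side of the tree first, and a Combine line only after both halves of an interval have finished, which is exactly the stacked behaviour asked about.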

How to rewrite a recursive solution as an iterative one

The problem is derived from an OJ (online judge).
The description is:
We are playing the Guess Game. The game is as follows:
I pick a number from 1 to n. You have to guess which number I picked.
Every time you guess wrong, I'll tell you whether the number I picked is higher or lower.
However, when you guess a particular number x, and you guess wrong, you pay $x. You win the game when you guess the number I picked.
Given a particular n ≥ 1, find out how much money you need to have to guarantee a win.
I wrote a small snippet for this min-max problem using recursion. But it is slow, and I want to rewrite it in an iterative way. Could anyone help with that and give me an idea of how to convert a recursive solution into an iterative one? Any idea is appreciated. The code is shown below:
public int getMoneyAmount(int n) {
    int[][] dp = new int[n + 1][n + 1];
    for (int i = 0; i < dp.length; i++)
        Arrays.fill(dp[i], -1);
    return solve(dp, 1, n);
}
private int solve(int[][] dp, int left, int right) {
    if (left >= right) {
        return 0;
    }
    if (dp[left][right] != -1) {
        return dp[left][right];
    }
    dp[left][right] = Integer.MAX_VALUE;
    for (int i = left; i <= right; i++) {
        dp[left][right] = Math.min(dp[left][right], i + Math.max(solve(dp, left, i - 1), solve(dp, i + 1, right)));
    }
    return dp[left][right];
}
In general, you convert using some focused concepts:
Replace the recursion with a while loop -- or a for loop, if you can pre-determine how many iterations you need (which you can do in this case).
Within the loop, check for the recursion's termination conditions; when you hit one of those, skip the rest of the loop.
Maintain local variables to replace the parameters and return value.
The loop termination is completion of the entire problem. In your case, this would be filling out the entire dp array.
The loop body consists of the computations that are currently in your recursion step: preparing the arguments for the recursive call.
Your general approach is to step through a nested (2-D) loop to fill out your array, starting from the simplest cases (left = right) and working your way to the far corner (left = 1, right = n). Note that your main diagonal is 0 (initialize that before you get into the loop), and your lower triangle is unused (don't even bother to initialize it).
For the loop body, you should be able to derive how to fill in each succeeding diagonal (one element shorter in each iteration) from the one you just did. That assignment statement is the body. In this case, you don't need the recursion termination conditions: the one that returns 0 is what you cover in initialization; the other you never hit, controlling left and right with your loop indices.
Are these enough hints to get you moving?
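As one concrete illustration of those hints (a bottom-up sketch that assumes the interval length is the natural outer loop variable, reusing the question's method name), the dp table can be filled diagonal by diagonal like this:
public int getMoneyAmount(int n) {
    // dp[left][right] = money needed to guarantee a win on the range [left, right].
    // The extra row/column keeps dp[i + 1][right] in bounds, and the main diagonal
    // and empty ranges stay 0, which covers the recursion's base case.
    int[][] dp = new int[n + 2][n + 2];
    for (int len = 2; len <= n; len++) {                  // interval length
        for (int left = 1; left + len - 1 <= n; left++) { // interval start
            int right = left + len - 1;
            dp[left][right] = Integer.MAX_VALUE;
            for (int i = left; i <= right; i++) {         // guess i
                int cost = i + Math.max(dp[left][i - 1], dp[i + 1][right]);
                dp[left][right] = Math.min(dp[left][right], cost);
            }
        }
    }
    return dp[1][n];
}
This walks from the shortest intervals up to [1, n], so every dp value an iteration reads has already been computed, which is exactly what the recursion's memoisation guaranteed.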

Simple low pass filter in fixed point

I have a simple circuit set up to read the light level via an LDR into an Arduino. I'm trying to implement a simple low-pass filter on the data read in. How best to tackle this, given that analogRead() returns an unsigned int?
I have tried to implement a simple fixed point representation but am unsure if this is the correct approach.
Here's a code snippet:
#define WLPF 0.1
#define FIXED_SHIFT 4
int ldr_val = ((int)analogRead(A0)) << FIXED_SHIFT;
while (true) {
    int newval = (int)analogRead(A0) << FIXED_SHIFT;
    ldr_val += WLPF * (newval - ldr_val);
    Serial.println(ldr_val >> FIXED_SHIFT, DEC);
}
Note the resolution of the ADC is 10 bits and I am working with an 8-bit Arduino Micro.
I'm paraphrasing from the book "Musical Applications of Microprocessors" by Hal Chamberlin, page 438:
If you allow large numbers in the accumulator, then you can make a first-order low-pass filter with one multiplication and some right-shifts.
out = accum >> k
accum = accum - out + in
Choose 'k' to change the cutoff frequency. The more shifts, the lower the low-pass cutoff, but the larger the value in the accumulator. With a 10-bit value from analogRead(), you can easily right-shift 4 places and still have 2 bits of headroom in the accumulator (as @datafiddler noted above).
Cypress has some app-notes for their PSOC chips with similar equations, and using shifts. I remember one had a nice table that related number of shifts to the cutoff frequency.
The approximate cutoff frequency is the sampling frequency divided by 2-pi times the gain factor:
f0 ~ fs / (2 pi a)
where 'a' is that power of two.
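To put numbers on that formula: sampling at fs = 1 kHz with k = 4 shifts (a = 16) puts the cutoff at roughly 1000 / (2π · 16) ≈ 10 Hz. And here is a minimal sketch of the accumulator idea in plain Java (not Arduino code; the sample values are made up) just to show the two lines of arithmetic in action:
public class ShiftFilterDemo {
    public static void main(String[] args) {
        final int k = 4;   // number of right-shifts; more shifts -> lower cutoff
        int accum = 0;     // holds out << k, so it needs k extra bits of headroom
        int[] samples = {0, 0, 1023, 1023, 1023, 1023, 1023, 1023}; // fake 10-bit ADC readings
        for (int in : samples) {
            int out = accum >> k;     // out   = accum >> k
            accum = accum - out + in; // accum = accum - out + in
            System.out.println("in=" + in + "  out=" + out);
        }
    }
}
The output creeps up toward 1023 one sample at a time, which is the expected first-order low-pass response.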
Keep smoothin' those signals!
On a device with no FPU, rather than multiplying by 0.1 (which in any case makes this a floating-point, not fixed-point, implementation) you should divide by 10:
#define WLPF_DIV 10
...
ldr_val += (newval - ldr_val) / WLPF_DIV;
However, division on an 8-bit processor is often expensive (although probably dwarfed by the execution time of Serial.println() in the loop - but that is a different issue). Instead it is more efficient to select a power of two so that the division can be performed with a right-shift.
#define WLPF_SHIFT 3 // divide by 8
...
ldr_val += (newval - ldr_val) >> WLPF_SHIFT ;
The use of a signed int is problematic since right-shifting a negative signed value is implementation-defined behaviour. In this case this can be resolved by changing the code to:
#define WLPF_DIV 8
...
ldr_val += (newval - ldr_val) / WLPF_DIV ;
The compiler will most likely spot the power-of-two constant and generate the code using an arithmetic-shift-right in any case. However you would probably do better to reconsider the data type.
You still have a right-shift in the Serial.println() call, but that too could be replaced with a divide-by-16:
#define WLPF_DIV 8
#define FIXED_MUL 16
int ldr_val = (int)analogRead(A0) * FIXED_MUL;
for (;;)
{
    int newval = (int)analogRead(A0) * FIXED_MUL;
    ldr_val += (newval - ldr_val) / WLPF_DIV;
    Serial.println(ldr_val / FIXED_MUL, DEC);
}
Outputting the data non-deterministically on a per-sample basis is not going to make for a very accurate filter and will dominate the timing in any case, so you have little control over the frequency response and it will not be stable. It also makes the previous performance optimisations rather pointless. You may want to think about that if it is important in your application - but that is a different question.
Stick with integer arithmetic:
#define WLPF 9
filtered = ((long)filtered * WLPF + newValue) / (WLPF + 1);
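With WLPF = 9 this is the same exponential smoothing as the original code's WLPF of 0.1, i.e. filtered = filtered * 9/10 + newValue * 1/10, but done entirely in integer arithmetic; the cast to long guards the intermediate product filtered * WLPF against 16-bit overflow.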

Hacks for clamping integer to 0-255 and doubles to 0.0-1.0?

Are there any branch-less or similar hacks for clamping an integer to the interval of 0 to 255, or a double to the interval of 0.0 to 1.0? (Both ranges are meant to be closed, i.e. endpoints are inclusive.)
I'm using the obvious minimum-maximum check:
value = (value < 0 ? 0 : value > 255 ? 255 : value);
but is there a way to get this faster -- similar to the "modulo" clamp value & 255? And is there a way to do similar things with floating points?
I'm looking for a portable solution, so preferably no CPU/GPU-specific stuff please.
This is a trick I use for clamping an int to a 0 to 255 range:
/**
 * Clamps the input to a 0 to 255 range.
 * @param v any int value
 * @return {@code v < 0 ? 0 : v > 255 ? 255 : v}
 */
public static int clampTo8Bit(int v) {
    // if out of range
    if ((v & ~0xFF) != 0) {
        // invert sign bit, shift to fill, then mask (generates 0 or 255)
        v = ((~v) >> 31) & 0xFF;
    }
    return v;
}
That still has one branch, but a handy thing about it is that you can test whether any of several ints are out of range in one go by ORing them together, which makes things faster in the common case that all of them are in range. For example:
/** Packs four 8-bit values into a 32-bit value, with clamping. */
public static int ARGBclamped(int a, int r, int g, int b) {
    if (((a | r | g | b) & ~0xFF) != 0) {
        a = clampTo8Bit(a);
        r = clampTo8Bit(r);
        g = clampTo8Bit(g);
        b = clampTo8Bit(b);
    }
    return (a << 24) + (r << 16) + (g << 8) + (b << 0);
}
Note that your compiler may already give you what you want if you code value = min (value, 255). This may be translated into a MIN instruction if it exists, or into a comparison followed by conditional move, such as the CMOVcc instruction on x86.
The following code assumes two's complement representation of integers, which is usually a given today. The conversion from Boolean to integer should not involve branching under the hood, as modern architectures either provide instructions that can directly be used to form the mask (e.g. SETcc on x86 and ISETcc on NVIDIA GPUs), or can apply predication or conditional moves. If all of those are lacking, the compiler may emit a branchless instruction sequence based on arithmetic right shift to construct a mask, along the lines of Boann's answer. However, there is some residual risk that the compiler could do the wrong thing, so when in doubt, it would be best to disassemble the generated binary to check.
int value, mask;
mask = 0 - (value > 255); // mask = all 1s if value > 255, all 0s otherwise
value = (255 & mask) | (value & ~mask);
On many architectures, use of the ternary operator ?: can also result in a branchless instruction sequence. The hardware may support select-type instructions which are essentially the hardware equivalent of the ternary operator, such as ICMP on NVIDIA GPUs. Or it may provide CMOV (conditional move) as on x86, or predication as on ARM, both of which can be used to implement branchless code for ternary operators. As in the previous case, one would want to examine the disassembled binary code to be absolutely sure the resulting code is without branches.
int value;
value = (value > 255) ? 255 : value;
In case of floating-point operands, modern floating-point units typically provide FMIN and FMAX instructions which map straight to the C/C++ standard math functions fmin() and fmax(). Alternatively fmin() and fmax() may be translated into a comparison followed by a conditional move. Again, it would be prudent to examine the generated code to make sure it is branchless.
double value;
value = fmax (fmin (value, 1.0), 0.0);
I use this thing, 100% branchless.
int clampU8(int val)
{
    val &= (val < 0) - 1;  // clamp < 0
    val |= -(val > 255);   // clamp > 255
    return val & 0xFF;     // mask out
}
For those using C#, Kotlin or Java, this is the best I could do; it's nice and succinct, if somewhat cryptic:
(x & ~(x >> 31) | 255 - x >> 31) & 255
It only works on signed integers so that might be a blocker for some.
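For reference, here is the same expression unpacked into a small commented Java method (the method name is just illustrative):
// Branchless clamp of a signed 32-bit int to [0, 255], equivalent to the expression above
// (assuming 255 - x does not overflow).
public static int clamp255(int x) {
    int notNegMask = ~(x >> 31);          // all ones if x >= 0, all zeros if x < 0
    int lowClamped = x & notNegMask;      // 0 if x was negative, otherwise x
    int overMask = (255 - x) >> 31;       // all ones if x > 255, all zeros otherwise
    return (lowClamped | overMask) & 255; // an over-range value forces 255, otherwise keep the low byte
}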
For clamping doubles, I'm afraid there's no language/platform agnostic solution.
The problem with floating point is that compilers offer options ranging from fastest operations (MSVC /fp:fast, gcc -funsafe-math-optimizations) to fully precise and safe (MSVC /fp:strict, gcc -frounding-math -fsignaling-nans). In fully precise mode the compiler does not try to use any bit hacks, even if it could.
A solution that manipulates the bits of a double cannot be portable: there may be different endianness, there may be no (efficient) way to get at the bits, and double is not necessarily IEEE 754 binary64 after all. Plus, direct manipulation will not raise signals for signaling NaNs when they are expected.
For integers most likely the compiler will do it right anyway, otherwise there are already good answers given.

Multiply a number by 10 n times

Is there a better mathematical way to multiply a number by 10 n times in Dart than the following (below)? I don't want to use the math library, because it would be overkill. It's no big deal; however, if there's a better (more elegant) way than the for loop, preferably one line, I'd like to know.
int iDecimals = 3;
int iValue = 1;
print("${iValue} to power of ${iDecimals} = ");
for (int iLp1 = 1; iLp1 <= iDecimals; iLp1++) {
  iValue *= 10;
}
print("${iValue}");
You are not raising to a power of ten, you are multiplying by a power of ten. That is, in your code the answer will be iValue * 10^iDecimals, while raising to a power would mean iValue^10.
Now, your code still contains exponentiation: what it does is raise ten to the power iDecimals and then multiply by iValue. The raising part can be made much more efficient. (Disclaimer: I've never written a line of Dart code before and I don't have an interpreter to test with, so this might not work right away.)
int iValue = 1;
int p = 3;
int a = 10;
// The following code raises `a` to the power of `p`
int tmp = 1;
while (p > 1) {
  if (p % 2 == 0) {
    p ~/= 2;            // integer division
  } else {
    tmp *= a;
    p = (p - 1) ~/ 2;
  }
  a *= a;
}
a *= tmp;
// in our example `a` is now 10^3
iValue *= a;
print("${iValue}");
This exponentiation algorithm is very straightforward and it is known as Exponentiation by squaring.
Use the math library. Your idea of doing so being "overkill" is misguided. The following is easier to write, easier to read, fewer lines of code, and most likely faster than anything you might replace it with:
import 'dart:math';
void main() {
  int iDecimals = 3;
  int iValue = 1;
  print("${iValue} times ten to the power of ${iDecimals} = ");
  iValue *= pow(10, iDecimals);
  print(iValue);
}
Perhaps you're deploying to JavaScript, concerned about deployment size, and unaware that dart2js does tree shaking?
Finally, if you do want to raise a number to the power of ten, as you asked for but didn't do, simply use pow(iValue, 10).
Considering that you don't want to use any math library, I think this (your loop) is the best way to compute the result. The time complexity of this code snippet also seems minimal. If you need a one-line solution you will have to use some math library function.
By the way, you are not raising to a power but simply multiplying a number by 10 n times.
Are you trying to multiply something by a power of 10? If so, I believe Dart supports scientific notation, so the above value could be written as iValue = 1e3, which is equal to 1000. If you want to raise the number itself to the power of ten, I think your only other option is to use the math library.
Because the criteria were that the answer must not require the math library, must be fast and ideally a mathematical solution (not a String one), and because the exponential-notation solution involves too much overhead (String, double and integer conversions), I think the only answer that meets the criteria is as follows:
for (int iLp1 = 0; iLp1 < iDecimals; iLp1++, iValue *= 10);
It is quite fast, doesn't require the math library, and is a one-liner.
