Arduino temperature conversion issues

My code won't convert Celsius to Fahrenheit. The snippet is below; I can't find my mistake. It outputs 90 when it should output 136.4.
float c= 58;
float temp= (((9 / 5) * c + 32));
Serial.println(temp);

You are falling foul of integer division: 9 / 5 is an all-integer expression, so it is truncated to 1. Change this line:
float temp= (((9 / 5) * c + 32));
to:
float temp= (((9.0 / 5.0) * c + 32.0));

Using floating-point operations on an Arduino is costly. Unlike most PCs and smartphones, the Arduino's processor has no native floating-point support, so you might want to use integer arithmetic instead.
As a general rule, to avoid unwanted rounding, you should do all multiplications first, and only then divide. Unless the accumulated products overflow, of course.
Here it would be:
int temp = (c * 9) / 5 + 32;
If you really need precision, you can for instance store the temperature in hundredths of a degree.
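For instance, a minimal sketch of the hundredths idea (the values and names are illustrative, written in JavaScript like the rest of this page; on the Arduino the same lines work with plain int variables, where the division truncates automatically):
var c100 = 5825; // 58.25 °C stored as an integer number of hundredths
var f100 = Math.trunc((c100 * 9) / 5) + 3200; // 13685, i.e. 136.85 °F
// Split into whole and fractional parts only when printing:
var whole = Math.trunc(f100 / 100); // 136
var frac = f100 % 100;              // 85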

How to find reverse percentage increase in javascript [duplicate]

I have the following dummy test script:
function test() {
    var x = 0.1 * 0.2;
    document.write(x);
}
test();
This will print the result 0.020000000000000004, while it should just print 0.02 (if you use your calculator). As far as I understand, this is due to errors in floating-point multiplication precision.
Does anyone have a good solution so that in such a case I get the correct result 0.02? I know there are functions like toFixed, and rounding would be another possibility, but I'd really like the whole number printed without any cutting or rounding. Just wanted to know whether one of you has some nice, elegant solution.
Of course, otherwise I'll round to some 10 digits or so.
From the Floating-Point Guide:
What can I do to avoid this problem?
That depends on what kind of calculations you're doing.
If you really need your results to add up exactly, especially when you work with money: use a special decimal datatype.
If you just don't want to see all those extra decimal places: simply format your result rounded to a fixed number of decimal places when displaying it.
If you have no decimal datatype available, an alternative is to work with integers, e.g. do money calculations entirely in cents. But this is more work and has some drawbacks.
Note that the first point only applies if you really need specific precise decimal behaviour. Most people don't need that; they're just irritated that their programs don't work correctly with numbers like 1/10, without realizing that they wouldn't even blink at the same error if it occurred with 1/3.
If the first point really applies to you, use BigDecimal for JavaScript or DecimalJS, which actually solves the problem rather than providing an imperfect workaround.
I like Pedro Ladaria's solution and use something similar.
function strip(number) {
    return parseFloat(number.toPrecision(12));
}
Unlike Pedro's solution, this will round up 0.999... repeating, and it is accurate to plus or minus one in the least significant digit. (Note the toPrecision call goes inside parseFloat, so strip returns a number rather than a string.)
Note: When dealing with 32 or 64 bit floats, you should use toPrecision(7) and toPrecision(15) for best results. See this question for info as to why.
For the mathematically inclined: http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
The recommended approach is to use correction factors (multiply by a suitable power of 10 so that the arithmetic happens between integers). For example, in the case of 0.1 * 0.2, the correction factor is 10, and you are performing the calculation:
> var x = 0.1
> var y = 0.2
> var cf = 10
> x * y
0.020000000000000004
> (x * cf) * (y * cf) / (cf * cf)
0.02
A (very quick) solution looks something like:
var _cf = (function() {
    function _shift(x) {
        var parts = x.toString().split('.');
        return (parts.length < 2) ? 1 : Math.pow(10, parts[1].length);
    }
    return function() {
        return Array.prototype.reduce.call(arguments, function(prev, next) {
            return prev === undefined || next === undefined
                ? undefined
                : Math.max(prev, _shift(next));
        }, -Infinity);
    };
})();
Math.a = function() {
    var f = _cf.apply(null, arguments);
    if (f === undefined) return undefined;
    function cb(x, y, i, o) { return x + f * y; }
    return Array.prototype.reduce.call(arguments, cb, 0) / f;
};
Math.s = function(l, r) { var f = _cf(l, r); return (l * f - r * f) / f; };
Math.m = function() {
    var f = _cf.apply(null, arguments);
    function cb(x, y, i, o) { return (x * f) * (y * f) / (f * f); }
    return Array.prototype.reduce.call(arguments, cb, 1);
};
Math.d = function(l, r) { var f = _cf(l, r); return (l * f) / (r * f); };
In this case:
> Math.m(0.1, 0.2)
0.02
I definitely recommend using a tested library like SinfulJS
Are you only performing multiplication? If so, you can use a neat secret about decimal arithmetic to your advantage: NumberOfDecimals(X) + NumberOfDecimals(Y) = ExpectedNumberOfDecimals. That is to say, if we have 0.123 * 0.12, then we know there will be 5 decimal places, because 0.123 has 3 decimal places and 0.12 has two. Thus, if JavaScript gave us a number like 0.014760000002, we could safely round to the 5th decimal place without fear of losing precision.
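A sketch of that trick (the helper names are mine, not from the answer; it assumes the inputs don't stringify in exponential notation):
function decimalCount(n) {
    var parts = n.toString().split('.');
    return parts.length < 2 ? 0 : parts[1].length; // digits after the point
}
function multiplyExact(x, y) {
    var places = decimalCount(x) + decimalCount(y); // decimals the true product must have
    var factor = Math.pow(10, places);
    return Math.round(x * y * factor) / factor;
}
multiplyExact(0.123, 0.12); // 0.01476
multiplyExact(0.1, 0.2);    // 0.02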
Surprisingly, this function has not been posted yet although others have similar variations of it. It is from the MDN web docs for Math.round().
It's concise and allows for varying precision.
function precisionRound(number, precision) {
    var factor = Math.pow(10, precision);
    return Math.round(number * factor) / factor;
}
console.log(precisionRound(1234.5678, 1));
// expected output: 1234.6
console.log(precisionRound(1234.5678, -1));
// expected output: 1230
var inp = document.querySelectorAll('input');
var btn = document.querySelector('button');
btn.onclick = function() {
    inp[2].value = precisionRound(parseFloat(inp[0].value) * parseFloat(inp[1].value), 5);
};
// MDN function
function precisionRound(number, precision) {
    var factor = Math.pow(10, precision);
    return Math.round(number * factor) / factor;
}
button {
    display: block;
}
<input type='text' value='0.1'>
<input type='text' value='0.2'>
<button>Get Product</button>
<input type='text'>
UPDATE: Aug/20/2019
Just noticed this error. I believe it's due to a floating point precision error with Math.round().
precisionRound(1.005, 2) // produces 1, incorrect, should be 1.01
These conditions work correctly:
precisionRound(0.005, 2) // produces 0.01
precisionRound(1.0005, 3) // produces 1.001
precisionRound(1234.5, 0) // produces 1235
precisionRound(1234.5, -1) // produces 1230
Fix:
function precisionRoundMod(number, precision) {
    var factor = Math.pow(10, precision);
    var n = precision < 0 ? number : 0.01 / factor + number;
    return Math.round(n * factor) / factor;
}
This just adds a digit to the right when rounding decimals.
MDN has updated the Math.round() page so maybe someone could provide a better solution.
I'm finding BigNumber.js meets my needs.
A JavaScript library for arbitrary-precision decimal and non-decimal arithmetic.
It has good documentation and the author is very diligent responding to feedback.
The same author has 2 other similar libraries:
Big.js
A small, fast JavaScript library for arbitrary-precision decimal arithmetic. The little sister to bignumber.js.
and Decimal.js
An arbitrary-precision Decimal type for JavaScript.
Here's some code using BigNumber:
$(function() {
    var product = BigNumber(.1).times(.2);
    $('#product').text(product);
    var sum = BigNumber(.1).plus(.2);
    $('#sum').text(sum);
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
<!-- 1.4.1 is not the current version, but works for this example. -->
<script src="http://cdn.bootcss.com/bignumber.js/1.4.1/bignumber.min.js"></script>
.1 × .2 = <span id="product"></span><br>
.1 &plus; .2 = <span id="sum"></span><br>
You are looking for an sprintf implementation for JavaScript, so that you can write out floats (which carry small errors, since they are stored in binary format) in the format that you expect.
Try javascript-sprintf, you would call it like this:
var yourString = sprintf("%.2f", yourNumber);
to print out your number as a float with two decimal places.
You may also use Number.toFixed() for display purposes, if you'd rather not include more files merely for floating point rounding to a given precision.
var times = function (a, b) {
    return Math.round((a * b) * 100) / 100;
};
or:
var fpFix = function (n) {
    return Math.round(n * 100) / 100;
};
fpFix(0.1 * 0.2); // -> 0.02
also:
var fpArithmetic = function (op, x, y) {
    var n = {
        '*': x * y,
        '-': x - y,
        '+': x + y,
        '/': x / y
    }[op];
    return Math.round(n * 100) / 100;
};
as in:
fpArithmetic('*', 0.1, 0.2); // 0.02
fpArithmetic('+', 0.1, 0.2); // 0.3
fpArithmetic('-', 0.1, 0.2); // -0.1
fpArithmetic('/', 0.2, 0.1); // 2
You can use parseFloat() and toFixed() if you want to bypass this issue for a small operation:
a = 0.1;
b = 0.2;
a + b = 0.30000000000000004;
c = parseFloat((a+b).toFixed(2));
c = 0.3;
a = 0.3;
b = 0.2;
a - b = 0.09999999999999998;
c = parseFloat((a-b).toFixed(2));
c = 0.1;
You just have to make up your mind on how many decimal digits you actually want; you can't have the cake and eat it too :-)
Numerical errors accumulate with every further operation, and if you don't cut them off early they just keep growing. Numerical libraries that present results which look clean simply cut off the last 2 digits at every step; numerical co-processors also have a "normal" and a "full" length for the same reason. Cut-offs are cheap for a processor but very expensive for you in a script (multiplying and dividing and using pow(...)). A good math lib would provide floor(x, n) to do the cut-off for you.
So at the very least you should make a global var/constant with pow(10, n), meaning that you have decided on the precision you need :-) Then do:
Math.floor(x * PREC_LIM) / PREC_LIM // floor - you are cutting off, not rounding
You could also keep doing math and only cut off at the end, assuming that you are only displaying and not doing if-s with the results. If you can do that, then .toFixed(...) might be more efficient.
If you are doing if-s/comparisons and don't want to cut off, you also need a small constant, usually called eps, which is one decimal place higher than the maximum expected error. Say your cut-off is the last two decimals; then your eps has a 1 at the 3rd place from the last (3rd least significant), and you can use it to compare whether the result is within the expected eps range (0.02 - eps < 0.1 * 0.2 < 0.02 + eps).
Notice that for general-purpose use, this behavior is likely to be acceptable.
The problem arises when comparing those floating-point values to determine an appropriate action.
With the advent of ES6, a new constant Number.EPSILON is defined to determine the acceptable error margin:
So instead of performing the comparison like this
0.1 + 0.2 === 0.3 // which returns false
you can define a custom compare function, like this:
function epsEqu(x, y) {
    return Math.abs(x - y) < Number.EPSILON;
}
console.log(epsEqu(0.1+0.2, 0.3)); // true
Source : http://2ality.com/2015/04/numbers-math-es6.html#numberepsilon
The result you've got is correct and fairly consistent across floating point implementations in different languages, processors and operating systems - the only thing that changes is the level of the inaccuracy when the float is actually a double (or higher).
0.1 in binary floating points is like 1/3 in decimal (i.e. 0.3333333333333... forever), there's just no accurate way to handle it.
If you're dealing with floats always expect small rounding errors, so you'll also always have to round the displayed result to something sensible. In return you get very very fast and powerful arithmetic because all the computations are in the native binary of the processor.
Most of the time the solution is not to switch to fixed-point arithmetic, mainly because it's much slower, and 99% of the time you just don't need the accuracy. If you're dealing with stuff that does need that level of accuracy (for instance, financial transactions), JavaScript probably isn't the best tool to use anyway (as you'd want to enforce fixed-point types, a static language is probably better).
If you're looking for the elegant solution, then I'm afraid this is it: floats are quick but have small rounding errors, so always round to something sensible when displaying their results.
The round() function at phpjs.org works nicely: http://phpjs.org/functions/round
num = .01 + .06; // yields 0.0699999999999
rnum = round(num,12); // yields 0.07
decimal.js, big.js or bignumber.js can be used to avoid floating-point manipulation problems in Javascript:
0.1 * 0.2 // 0.020000000000000004
x = new Decimal(0.1)
y = x.times(0.2) // '0.02'
x.times(0.2).equals(0.02) // true
big.js: minimalist; easy-to-use; precision specified in decimal places; precision applied to division only.
bignumber.js: bases 2-64; configuration options; NaN; Infinity; precision specified in decimal places; precision applied to division only; base prefixes.
decimal.js: bases 2-64; configuration options; NaN; Infinity; non-integer powers, exp, ln, log; precision specified in significant digits; precision always applied; random numbers.
link to detailed comparisons
It even handles 0.6 * 3, which is awesome! For me this works fine:
function dec(num) {
    var p = 100;
    return Math.round(num * p) / p;
}
Very, very simple.
To avoid this, you should work with integer values instead of floating point. So when you want two positions of precision, work with the values * 100; for three positions, use 1000. When displaying, you use a formatter to put the separator back in.
Many systems sidestep decimals this way. That is why many systems work with cents (as integers) instead of dollars/euros (as floating point).
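A minimal sketch of the cents approach (the names and the tax rate are illustrative):
var priceCents = 1999; // $19.99 stored as an integer
var taxCents = Math.round(priceCents * 0.0825); // 165: round once, at a defined point
var totalCents = priceCents + taxCents; // 2164
// Format only when displaying:
(totalCents / 100).toFixed(2); // "21.64"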
not elegant but does the job (removes trailing zeros)
var num = 0.1*0.2;
alert(parseFloat(num.toFixed(10))); // shows 0.02
Problem
Floating point can't store all decimal values exactly, so when using floating-point formats there will always be rounding errors on the input values.
The errors on the inputs, of course, result in errors on the output.
In the case of a discrete function or operator, there can be big differences in the output around the point where the function or operator is discontinuous.
Input and output for floating point values
So, when using floating point variables, you should always be aware of this. And whatever output you want from a calculation with floating points should always be formatted/conditioned before displaying with this in mind.
When only continuous functions and operators are used, rounding to the desired precision often will do (don't truncate). Standard formatting features used to convert floats to string will usually do this for you.
Because the rounding adds an error which can cause the total error to be more than half of the desired precision, the output should be corrected based on the expected precision of the inputs and the desired precision of the output. You should:
Round inputs to the expected precision, or make sure no values can be entered with higher precision.
Add a small value to the outputs before rounding/formatting them which is smaller than or equal to 1/4 of the desired precision and bigger than the maximum expected error caused by rounding errors on input and during calculation (see the sketch below). If that is not possible, the precision of the data type used isn't enough to deliver the desired output precision for your calculation.
These two things are usually not done, and in most cases the differences caused by not doing them are too small to be important for most users, but I already had a project where output wasn't accepted by the users without those corrections.
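A sketch of that second correction (the values are illustrative and assume the inputs were already rounded to the desired precision):
var precision = 0.01; // desired output precision
var nudge = precision / 4; // <= 1/4 of the precision, > the expected error
var raw = 0.1 + 0.2; // 0.30000000000000004
var corrected = Math.round((raw + nudge) / precision) * precision;
corrected.toFixed(2); // "0.30"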
Discrete functions or operators (like modulo)
When discrete operators or functions are involved, extra corrections might be required to make sure the output is as expected. Rounding and adding small corrections before rounding can't solve the problem.
A special check/correction on intermediate calculation results, immediately after applying the discrete function or operator might be required.
For a specific case (modula operator), see my answer on question: Why does modulus operator return fractional number in javascript?
Better avoid having the problem
It is often more efficient to avoid these problems by using data types (integer or fixed point formats) for calculations like this which can store the expected input without rounding errors.
An example of that is that you should never use floating point values for financial calculations.
Elegant, Predictable, and Reusable
Let's deal with the problem in an elegant, reusable way. The following seven lines will let you access the floating-point precision you desire on any number simply by appending .decimal to the end of the number, formula, or built-in Math function.
// First extend the native Number object to handle precision. This populates
// the functionality to all math operations.
Object.defineProperty(Number.prototype, "decimal", {
    get: function decimal() {
        Number.precision = "precision" in Number ? Number.precision : 3;
        var f = Math.pow(10, Number.precision);
        return Math.round(this * f) / f;
    }
});
// Now lets see how it works by adjusting our global precision level and
// checking our results.
console.log("'1/3 + 1/3 + 1/3 = 1' Right?");
console.log((0.3333 + 0.3333 + 0.3333).decimal == 1); // true
console.log(0.3333.decimal); // 0.333 - A raw 4 digit decimal, trimmed to 3...
Number.precision = 3;
console.log("Precision: 3");
console.log((0.8 + 0.2).decimal); // 1
console.log((0.08 + 0.02).decimal); // 0.1
console.log((0.008 + 0.002).decimal); // 0.01
console.log((0.0008 + 0.0002).decimal); // 0.001
Number.precision = 2;
console.log("Precision: 2");
console.log((0.8 + 0.2).decimal); // 1
console.log((0.08 + 0.02).decimal); // 0.1
console.log((0.008 + 0.002).decimal); // 0.01
console.log((0.0008 + 0.0002).decimal); // 0
Number.precision = 1;
console.log("Precision: 1");
console.log((0.8 + 0.2).decimal); // 1
console.log((0.08 + 0.02).decimal); // 0.1
console.log((0.008 + 0.002).decimal); // 0
console.log((0.0008 + 0.0002).decimal); // 0
Number.precision = 0;
console.log("Precision: 0");
console.log((0.8 + 0.2).decimal); // 1
console.log((0.08 + 0.02).decimal); // 0
console.log((0.008 + 0.002).decimal); // 0
console.log((0.0008 + 0.0002).decimal); // 0
Cheers!
Solved it by first making both numbers integers, executing the expression and afterwards dividing the result to get the decimal places back:
function evalMathematicalExpression(a, b, op) {
    const smallest = String(a < b ? a : b);
    const factor = smallest.length - smallest.indexOf('.');
    for (let i = 0; i < factor; i++) {
        b *= 10;
        a *= 10;
    }
    a = Math.round(a);
    b = Math.round(b);
    const m = 10 ** factor;
    switch (op) {
        case '+':
            return (a + b) / m;
        case '-':
            return (a - b) / m;
        case '*':
            return (a * b) / (m ** 2);
        case '/':
            return a / b;
    }
    throw `Unknown operator ${op}`;
}
Results for several operations (the numbers in parentheses are the raw floating-point results):
0.1 + 0.002 = 0.102 (0.10200000000000001)
53 + 1000 = 1053 (1053)
0.1 - 0.3 = -0.2 (-0.19999999999999998)
53 - -1000 = 1053 (1053)
0.3 * 0.0003 = 0.00009 (0.00008999999999999999)
100 * 25 = 2500 (2500)
0.9 / 0.03 = 30 (30.000000000000004)
100 / 50 = 2 (2)
From my point of view, the idea here is to round the fp number in order to have a nice/short default string representation.
The 53-bit significand precision gives from 15 to 17 significant decimal digits of precision (2^-53 ≈ 1.11 × 10^-16).
If a decimal string with at most 15 significant digits is converted to IEEE 754 double-precision representation, and then converted back to a decimal string with the same number of digits, the final result should match the original string.
If an IEEE 754 double-precision number is converted to a decimal string with at least 17 significant digits, and then converted back to double-precision representation, the final result must match the original number.
...
With the 52 bits of the fraction (F) significand appearing in the memory format, the total precision is therefore 53 bits (approximately 16 decimal digits, 53 log10(2) ≈ 15.955). The bits are laid out as follows ... wikipedia
(0.1).toPrecision(100) ->
0.1000000000000000055511151231257827021181583404541015625000000000000000000000000000000000000000000000
(0.1+0.2).toPrecision(100) ->
0.3000000000000000444089209850062616169452667236328125000000000000000000000000000000000000000000000000
Then, as far as I understand, we can round the value up to 15 digits to keep a nice string representation.
10**Math.floor(53 * Math.log10(2)) // 1e15
eg.
Math.round((0.2+0.1) * 1e15 ) / 1e15
0.3
(Math.round((0.2+0.1) * 1e15 ) / 1e15).toPrecision(100)
0.2999999999999999888977697537484345957636833190917968750000000000000000000000000000000000000000000000
The function would be:
function roundNumberToHaveANiceDefaultStringRepresentation(num) {
    const integerDigits = Math.floor(Math.log10(Math.abs(num)) + 1);
    const mult = 10 ** (15 - integerDigits); // also consider integer digits
    return Math.round(num * mult) / mult;
}
Have a look at Fixed-point arithmetic. It will probably solve your problem, if the range of numbers you want to operate on is small (eg, currency). I would round it off to a few decimal values, which is the simplest solution.
You can't represent most decimal fractions exactly with binary floating point types (which is what ECMAScript uses to represent floating point values). So there isn't an elegant solution unless you use arbitrary precision arithmetic types or a decimal based floating point type. For example, the Calculator app that ships with Windows now uses arbitrary precision arithmetic to solve this problem.
You are right; the reason for that is the limited precision of floating-point numbers. Store your rational numbers as a ratio of two integers, and in most situations you'll be able to store numbers without any precision loss. When it comes to printing, you may want to display the result as a fraction. With the representation I proposed, this becomes trivial.
Of course, that won't help much with irrational numbers. But you may want to optimize your computations in a way that causes the fewest problems (e.g. detecting situations like sqrt(3)^2).
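A minimal sketch of that representation (the helper names are mine; fractions are left unreduced for brevity):
function frac(num, den) { return { num: num, den: den }; }
function mul(a, b) { return frac(a.num * b.num, a.den * b.den); }
function add(a, b) { return frac(a.num * b.den + b.num * a.den, a.den * b.den); }
function show(f) { return f.num + '/' + f.den; }
show(mul(frac(1, 10), frac(2, 10))); // '2/100', i.e. exactly 0.02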
I had a nasty rounding-error problem with mod 3. Sometimes when I should get 0, I would get .000...01. That's easy enough to handle: just test for <= .01. But then sometimes I would get 2.99999999999998. OUCH!
BigNumbers solved the problem, but introduced another, somewhat ironic, problem. When trying to load 8.5 into BigNumbers I was informed that it was really 8.4999… and had more than 15 significant digits. This meant BigNumbers could not accept it (I believe I mentioned this problem was somewhat ironic).
Simple solution to ironic problem:
x = Math.round(x*100);
// I only need 2 decimal places, if i needed 3 I would use 1,000, etc.
x = x / 100;
xB = new BigNumber(x);
You can use the library https://github.com/MikeMcl/decimal.js/. It helps a lot in getting a proper solution.
JavaScript console output: 95 * 722228.630 / 100 = 686117.1984999999
Decimal library implementation:
var firstNumber = new Decimal(95);
var secondNumber = new Decimal(722228.630);
var thirdNumber = new Decimal(100);
var partialOutput = firstNumber.times(secondNumber);
console.log(partialOutput.valueOf());
var output = partialOutput.div(thirdNumber);
alert(output.valueOf());
console.log(output.valueOf()); // 686117.1985
Avoid dealing with floating points during the operation using Integers
As stated in the most-voted answer so far, you can work with integers: multiply all your factors by 10 for each decimal place you are working with, and divide the result by the same number used.
For example, if you are working with 2 decimals, you multiply all your factors by 100 before doing the operation, and then divide the result by 100.
Here's an example, Result1 is the usual result, Result2 uses the solution:
var Factor1="1110.7";
var Factor2="2220.2";
var Result1=Number(Factor1)+Number(Factor2);
var Result2=((Number(Factor1)*100)+(Number(Factor2)*100))/100;
var Result3=(Number(parseFloat(Number(Factor1))+parseFloat(Number(Factor2))).toPrecision(2));
document.write("Result1: "+Result1+"<br>Result2: "+Result2+"<br>Result3: "+Result3);
The third result is to show what happens when using parseFloat instead, which created a conflict in our case.
I could not find a solution using the built in Number.EPSILON that's meant to help with this kind of problem, so here is my solution:
function round(value, precision) {
    const power = Math.pow(10, precision)
    return Math.round((value * power) + (Number.EPSILON * power)) / power
}
This uses Number.EPSILON, the known smallest difference between 1 and the smallest floating-point number greater than one, to fix values that would otherwise end up just one EPSILON below the rounding-up threshold.
Maximum precision is 15 for 64-bit floating point and 6 for 32-bit floating point. Your JavaScript numbers are almost certainly 64-bit.
Try my chiliadic arithmetic library, which you can see here.
If you want a later version, I can get you one.
Use Number(1.234443).toFixed(2); it will print 1.23
function test() {
    var x = 0.1 * 0.2;
    document.write(Number(x).toFixed(2));
}
test();

lerp for integers or in fixed point math

Is there an elegant way to do linear interpolation using integers? (I want to average ADC measurements on a microcontroller; the ADC measurements are 12-bit, and the microcontroller works fine with 32-bit integers.) The coefficient f is in the [0, 1] range.
float lerp(float a, float b, float f)
{
    return a + f * (b - a);
}
Well, since you have so many extra integer bits to spare, a solution using ints would be:
Use an integer for your parameter F, with F from 0 to 1024 instead of a float from 0 to 1. Then you can just do:
(A*(1024-F) + B * F) >> 10
without risk of overflow.
In fact, if you need more resolution in your parameter, you can pick the maximum value of F as any power of 2 up to 2**19 (if you are using unsigned ints; 2**18 otherwise).
This doesn't do a good job of rounding (it truncates instead) but it only uses integer operations, and avoids division by using the shift operator. It still requires integer multiplication, which a number of MCUs don't have hardware for, but hopefully it won't be too bad.
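A sketch of that formula as a function (JavaScript here for consistency with the rest of this page; its bitwise operators work on 32-bit integers, so it mirrors the MCU arithmetic, and the function name is mine):
function ilerp(a, b, f) { // a, b: 12-bit ADC values; f: 0..1024
    return (a * (1024 - f) + b * f) >> 10;
}
ilerp(1000, 3000, 0);    // 1000
ilerp(1000, 3000, 512);  // 2000 (halfway)
ilerp(1000, 3000, 1024); // 3000
// Adding 512 before the shift would round instead of truncate:
// (a * (1024 - f) + b * f + 512) >> 10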

a negative unsigned int?

I'm trying to wrap my head around the TrueType specification. On this page, in the section 'cmap' format 4, the parameter idDelta is listed as an unsigned 16-bit integer (UInt16). Yet further down, a few examples are given, and there idDelta is given the values -9, -18, -27 and 1. How is this possible?
This is not a bug in the spec. The reason they show negative numbers in the idDelta row of the examples is that "All idDelta[i] arithmetic is modulo 65536" (quoted from the section just above). Here's how that works.
The formula to get the glyph index is
glyphIndex = idDelta[i] + c
where c is the character code. Since this expression must be taken modulo 65536, it is equivalent to the following expression if you were using integers larger than 2 bytes:
glyphIndex = (idDelta[i] + c) % 65536
idDelta is a u16, so let's say it had the max value 65535 (0xFFFF); then glyphIndex would be equal to c - 1, since:
0xFFFF + 2 = 0x10001
0x10001 % 0x10000 = 1
You can think of this as a 16-bit integer wrapping around to 0 when an overflow occurs.
Now remember that a modulo is repeated division, keeping the remainder. In this case, since idDelta is only 16 bits, the maximum number of divisions the modulo will ever need is one: the largest value you can get from adding two 16-bit integers is 0x1FFFE, which is smaller than 2 × 0x10000. That means a shortcut is to subtract 65536 (0x10000) instead of performing the modulo.
glyphIndex = (idDelta[i] - 0x10000) + c
And this is what the example shows as the values in the table. Here's an actual example from a .ttf file I've decoded:
I want the index for character code 97 (lowercase 'a').
97 is greater than 32 and smaller than 126, so we use index 2 of the mappings.
idDelta[2] == 65507
glyphIndex = (65507 + 97) % 65536 === 68, which is the same as (65507 - 65536) + 97 === 68
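The wraparound is easy to sketch in code (JavaScript for illustration; in C this would be plain uint16_t addition, and the function name is mine):
function glyphIndexFor(idDelta, charCode) {
    return (idDelta + charCode) & 0xFFFF; // modulo 65536, i.e. 16-bit wraparound
}
glyphIndexFor(65507, 97); // 68
// Equivalent signed view: (65507 - 65536) + 97 === -29 + 97 === 68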
The definition and use of idDelta on that page is not consistent. In the struct subheader it is defined as an int16, while a little earlier the same subheader is listed as UInt16*4.
It's probably a bug in the spec.
If you look at actual implementations, like this one from perl Tk, you'll see that idDelta is usually given as signed:
typedef struct SUBHEADER {
    USHORT firstCode;     /* First valid low byte for subHeader. */
    USHORT entryCount;    /* Number valid low bytes for subHeader. */
    SHORT  idDelta;       /* Constant adder to get base glyph index. */
    USHORT idRangeOffset; /* Byte offset from here to appropriate
                           * glyphIndexArray. */
} SUBHEADER;
Or see the implementation from libpdfxx:
struct SubHeader
{
    USHORT firstCode;
    USHORT entryCount;
    SHORT  idDelta;
    USHORT idRangeOffset;
};

Decompose integer into two bytes

I'm working on an embedded project where I have to write a time-out value into two byte registers of some micro-chip.
The time-out is defined as:
timeout = REG_a * (REG_b + 1)
I want to program these registers using an integer in the range of 256 to, let's say, 60000. I am looking for an algorithm which, given a timeout value, calculates REG_a and REG_b.
If an exact solution is impossible, I'd like to get the next possible larger time-out value.
What have I done so far:
My current solution calculates:
temp = integer_square_root(timeout) + 1;
REG_a = temp;
REG_b = temp - 1;
This results in values that work well in practice. However I'd like to see if you guys could come up with a more optimal solution.
Oh, and I am memory constrained, so large tables are out of question. Also the running time is important, so I can't simply brute-force the solution.
You could use the code from the answer to "Algorithm to find the factors of a given Number.. Shortest Method?" to find a factor of timeout.
n = timeout
REG_A = n // fall back to timeout itself in case it is prime
for (i = 2; i * i <= n; ++i) // check each i up to the square root of timeout
{
    if (n % i == 0) // i divides timeout, so it is its smallest factor
    {
        REG_A = i;
        break;
    }
}
The smallest factor will be your REG_A; the value that, multiplied by it, equals timeout then gives REG_B:
REG_B = timeout / REG_A - 1
(Note that if timeout is prime, or if timeout / REG_A - 1 exceeds 255, the exact factorization won't fit in byte registers; see the next answer for an approach that handles that.)
Interesting problem, Nils!
Suppose you start by fixing one of the values, say Reg_a, then compute Reg_b by division with round-up: Reg_b = ((timeout + Reg_a - 1) / Reg_a) - 1.
Then you know you're close, but how close? Well the upper bound on the error would be Reg_a, right? Because the error is the remainder of the division.
If you make one of factors as small as possible, then compute the other factor, you'd be making that upper bound on the error as small as possible.
On the other hand, by making the two factors close to the square root, you're making the divisor as large as possible, and therefore making the error as large as possible!
So:
First, what is the minimum value for Reg_a? (timeout + 255) / 256, i.e. timeout divided by 256 and rounded up, which guarantees that Reg_b + 1 fits in a byte.
Then compute Reg_b as above.
This won't be the absolute minimum combination in all cases, but it should be better than using the square root, and faster, too.
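A sketch of this scheme (JavaScript for illustration; the names are mine, and on the MCU this is all plain integer arithmetic):
function decompose(timeout) {
    var regA = Math.floor((timeout + 255) / 256); // smallest Reg_a so Reg_b fits in a byte
    var regB = Math.ceil(timeout / regA) - 1;     // round up so we never undershoot
    return { regA: regA, regB: regB, actual: regA * (regB + 1) };
}
decompose(1000);  // { regA: 4, regB: 249, actual: 1000 } - exact
decompose(41828); // { regA: 164, regB: 255, actual: 41984 } - next larger value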

Division, Remainders and only Real Numbers Allowed

Trying to figure out this pseudocode. The following is assumed:
I can only use unsigned and signed integers (or longs).
Division returns a whole number, with no remainder.
MOD returns a whole number.
Fractions and decimals are not handled.
INT I = 41828;
INT C = 15;
INT D = 0;
D = (I / 65535) * C;
How would you handle a fraction (or decimal value) in this situation? Is there a way to use a negative value to represent the remainder?
In this example, I / 65535 should be 0.638; however, with these limitations I get 0, with a MOD of 638. How can I then multiply by C to get the correct answer?
Hope that makes sense.
MOD here would actually return 41828, not 638; since 41828 is smaller than 65535, the division leaves the whole value as the remainder.
If you were to switch your order of operations on that last line, you would get the integer answer you're looking for (9, if my calculations are correct)
D = (I * C) / 65535
/* D == 9 */
Is that the answer you're looking for?
Well, one way to handle decimals is this replacement division function. There are numerous obvious downsides to this technique.
ALT DIV(dividend, divisor) returns (decimal, point)
    for point = 0 to 99
        if dividend mod divisor = 0 return dividend / divisor, point
        dividend = dividend * 10
    return dividend / divisor, 100
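In runnable form, limitations intact (JavaScript for illustration; returning the pair as an object is my choice):
function altDiv(dividend, divisor) {
    for (var point = 0; point < 100; point++) {
        if (dividend % divisor === 0) return { decimal: dividend / divisor, point: point };
        dividend *= 10; // note: this overflows double precision well before 100 digits
    }
    return { decimal: Math.trunc(dividend / divisor), point: 100 };
}
altDiv(1, 8); // { decimal: 125, point: 3 }, i.e. 0.125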
Assuming these are the values you're always using for this computation, then I would do something like:
D = I / (65535 / C);
or
D = I / 4369;
Since C is a factor of 65535 (65535 / 15 = 4369), this will help to reduce the possibility of overrunning the available range of integers (i.e. if you've only got 16-bit unsigned ints).
In the more general case, if you think there's a risk that the multiplication of I and C will result in a value outside the allowed range of the integer type you're using (even if the final result would be inside that range), you can factor out the GCD of the numerator and denominator, as in:
INT I = 41828;
INT C = 15;
INT DEN = 65535;
INT GCDI = GCD(I, DEN);
DEN = DEN / GCDI;
I = I / GCDI;
INT GCDC = GCD(C, DEN);
DEN = DEN / GCDC;
C = C / GCDC;
INT D = (I * C) / DEN;
Here DEN is your denominator (65535 in this case). This will not give you the correct answer in all cases, especially if I and C are both coprime to DEN and I * C > MAX_INT.
As to the larger question you raise, division of integer values will always lose the decimal component (equivalent to the floor function). The only way to preserve the information contained in what we think of as the "decimal" part is through the remainder, which can be derived from the modulus. I highly encourage you not to mix the meanings of these different number systems. Integers are just that: integers. If you need them to be floating-point numbers, you should really be using floats, not ints. If all you're interested in doing is displaying the decimal part to the user (i.e. you're not really using it for further computation), then you could write a routine to convert the remainder into a character string representing the remainder.
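For completeness, here is the Euclidean GCD that the snippet above assumes (JavaScript for illustration; the same loop works with any integer type):
function GCD(a, b) {
    while (b !== 0) {
        var t = b;
        b = a % b;
        a = t;
    }
    return a;
}
GCD(15, 65535);    // 15, so DEN reduces to 4369
GCD(41828, 65535); // 1, i.e. I shares no factor with DEN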
