How can I reliably render bit accurate outputs to a floating-point texture?
I am trying to encode uint values with the intBitsToFloat function within a shader and write the result to an RGBA32F texture. This is a workaround for the inability to alpha blend integer textures; later on, I want to decode the float values back to their initial uint values.
This is the shader code used:
output_id.r = intBitsToFloat(input_symbol_id);
output_id.g = intBitsToFloat(input_instance_id);
output_id.b = 0.0;
output_id.a = alpha > 0.5 ? 1.0 : 0.0;
where input_symbol_id and input_instance_id are the int values I want to encode.
This doesn't seem to work, though. The output for very small values (e.g., intBitsToFloat(1)) always gets truncated to 0.0 when later reading from the output texture via readPixels. Larger values (e.g., 1.0) seem to get passed through just fine.
This is using WebGL 2.0.
I'm aware of this question, which describes a similar problem. I am, however, already employing the fix described there, and I still only get zero values back.
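For what it's worth, here is a quick Python check (outside WebGL, just reinterpreting the bits) of what intBitsToFloat(1) actually produces: the bit pattern 1 decodes to the smallest subnormal float32, and many GPUs flush subnormals to zero, which would be consistent with small values reading back as 0.0:

import struct

bits = 1
value = struct.unpack('<f', struct.pack('<I', bits))[0]  # reinterpret the int bits as float32
print(value)  # 1.401298464324817e-45, a subnormal (denormal) number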
My code contains some random steps and an exponential expression (a monotonic expression) whose root needs to be found at the end. The warning "RuntimeWarning: invalid value encountered in double_scalars" appears occasionally, for example two or three times out of five runs. Could you tell me what's going on here? PS: I do get a result every time; it's just that the warning confuses me.
There are two possible ways to solve it, depending on your data.
1.
You are handling some huge numbers and exceeding the limit of double.
To solve this, the method is actually quite mathematical. It only works if the final value (T_data[runs][0])*(np.exp(-(x)*(T_data[runs][1]))) is itself always smaller than 1.7976931348623157e+308, the largest double.
Since a*e^(-x*b) = e^(ln(a) - x*b), we have
(T_data[runs][0])*(np.exp(-(x)*(T_data[runs][1]))) = np.exp(np.log(T_data[runs][0]) - (x)*(T_data[runs][1]))
Use np.exp(np.log(T_data[runs][0]) - (x)*(T_data[runs][1])) instead; that way no intermediate value has to hold the huge factor on its own.
2.
However, as you said you get a result every time, it is more likely that (T_data[runs][0])*(np.exp(-(x)*(T_data[runs][1]))) is approaching zero: a value too small for a double to hold, but harmless to store as 0.
And you should change your code like this to avoid the warning:
temp = 0 if (x)*(T_data[runs][1]) > 709 else np.exp(-(x)*(T_data[runs][1]))
exponential += (T_data[runs][0]) * temp
# since ln(1.7976931348623157e+308) ≈ 709.78
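To see the difference concretely, here is a small Python sketch with made-up stand-ins (a plays the role of T_data[runs][0], and e the role of -(x)*(T_data[runs][1])):

import numpy as np

a, e = 1e-300, 750.0               # hypothetical values; e > 709.78, so np.exp(e) alone overflows

direct = a * np.exp(e)             # np.exp(750) gives inf with a RuntimeWarning, so the product is inf
log_space = np.exp(np.log(a) + e)  # np.exp(-690.78 + 750) is about 5.3e25, perfectly finite
print(direct, log_space)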
This is something that has always bugged me when I look at code around the web and in so much of the literature: why do we multiply by 255 and not 256?
Sometimes you'll see something like this:
float input = some_function(); // returns 0.0 to 1.0
byte output = input * 255.0;
(I'm assuming that there's an implicit floor going on during the type conversion.)
Am I not correct in thinking that this is clearly wrong?
Consider this:
What range of input gives an output of 0? [0, 1/255), right?
What range of input gives an output of 1? [1/255, 2/255), great!
What range of input gives an output of 255? Only 1.0 does; any smaller value of input gives a lower output.
This means that input is not evenly mapped onto the output range.
OK, so you might think: use a better rounding function:
byte output = round(input * 255.0);
where round() is the usual mathematical rounding to zero decimal places. But this is still wrong. Ask the same questions:
What range of input gives an output of 0? [0, 0.5/255)
What range of input gives an output of 1? [0.5/255, 1.5/255), twice as much as for 0!
What range of input gives an output of 255? [254.5/255, 1.0], again half as much as for 1.
So in this case the input range isn't evenly mapped either!
IMHO, the right way to do this mapping is this:
byte output = min(255, input * 256.0);
Again:
What range of input gives an output of 0? [0, 1/256)
What range of input gives an output of 1? [1/256, 2/256)
What range of input gives an output of 255? [255/256, 1.0]
All those ranges are the same size and constitute 1/256th of the input.
I guess my question is this: am I right in considering this a bug, and if so, why is it so prevalent in code?
Edit: it looks like I need to clarify. I'm not talking about random numbers here or probability, and I'm not talking about colors or hardware at all. I'm talking about converting a float in the range [0,1] evenly to a byte in [0,255], so that the range of input values corresponding to each output value is the same size.
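To make the claim concrete, here is a quick empirical check in Python (a sketch using 10,001 evenly spaced samples) counting how many inputs land in each output bucket under the three mappings:

from collections import Counter

inputs = [i / 10000 for i in range(10001)]  # evenly spaced samples covering [0, 1]

floor255 = Counter(int(x * 255) for x in inputs)
round255 = Counter(round(x * 255) for x in inputs)
clamp256 = Counter(min(255, int(x * 256)) for x in inputs)

print(floor255[0], floor255[1], floor255[255])  # ~40, ~39, 1: bucket 255 is starved
print(round255[0], round255[1], round255[255])  # ~20, ~39, ~20: half-size edge buckets
print(clamp256[0], clamp256[1], clamp256[255])  # ~40, ~39, ~40: uniform buckets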
You are right. Assuming that valueBetween0and1 can take values 0.0 and 1.0, the "correct" way to do it is something like
byteValue = (byte)(min(255, valueBetween0and1 * 256))
Having said that, one could also argue that the desired quality of the software can vary: does it really matter whether you get 16777216 or 16581375 colors in some throw-away plot?
It is one of those "trivial" tasks that is very easy to get wrong by ±1. Is it worth spending five minutes trying to get the 255th pixel intensity exactly right, or can you apply your precious attention elsewhere? It depends on the situation: (byte)(valueBetween0and1 * 255) is a pragmatic solution which is simple, cheap, close enough to the truth, and also immediately, obviously "harmless" in the sense that it definitely won't produce 256 as output. It's not a good solution if you are working on an image manipulation tool like Photoshop, or on a rendering pipeline for a computer game, but it is perfectly acceptable in almost all other contexts. So whether it is a "bug" or merely a minor improvement proposal depends on the context.
Here is a variant of your problem, which involves random number generators:
Generate random numbers in specified range - various cases (int, float, inclusive, exclusive)
Notice that e.g. Math.random() in Java or Random.NextDouble in C# return values greater than or equal to 0, but strictly smaller than 1.0.
You want the case "Integer-B: [min, max)" (inclusive-exclusive) with min = 0 and max = 256.
If you follow the "recipe" Int-B exactly, you obtain the code:
0 + floor(random() * (256 - 0))
If you remove all the zeros, you are left with just
floor(random() * 256)
and you don't need to & with 0xFF, because you never get 256 (as long as your random number generator guarantees to never return 1).
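In Python, the same recipe looks like this (random.random() likewise returns values in [0.0, 1.0)):

import random

byte_value = int(random.random() * 256)  # always in 0..255, never 256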
I think your question is misguided. You seem to start from the assumption that there is some "fairness rule" that enforces the "right way" of translation. Unfortunately, in practice this is not the case. If you just want to generate a random color, then you may use whatever logic fits you. But if you are doing actual image processing, there is no rule that says each integer value has to be mapped onto an interval of the same size on the float scale.

On the contrary, what you really want is a mapping between two inclusive intervals, [0;1] and [0;255]. And often you don't know how many real discretization steps there will be in the [0;1] range down the line when the color is actually shown. (On modern monitors there are probably all 256 different levels for each color, but on other output devices there might be significantly fewer choices, and the total number might not be a power of 2.) The real mapping rule is that if the red component values of two colors are R1 and R2, then the ratio of the actual colors' red brightness should be as close to R1:R2 as possible. This rule automatically implies multiplying by 255 when you map onto [0;255], and thus this is what everybody does.
Note that what you suggest most probably introduces a bug rather than fixing one. For example, the proportion rule means that you can calculate a mix of two colors R1 and R2 with mixing coefficients k1 and k2 as
Rmix = (k1*R1 + k2*R2)/(k1+k2)
Now let's try to calculate a 3:1 mix of 100% Red with 100% Black (i.e., 0% Red) in two ways:
using [0;255] integers: Rmix = (255*3 + 1*0)/(3+1) = 191.25 ≈ 191
using the [0;1] floating range and then converting to [0;255]: Rmix_float = (1.0*3 + 1*0.0)/(3+1) = 0.75, so Rmix_converted_256 = 256*0.75 = 192
This means your "multiply by 256" logic has actually introduced an inconsistency: different results depending on which scale you use for image processing. Obviously, if you used the "multiply by 255" logic as everyone else does, you'd get a consistent answer: Rmix_converted_255 = 255*0.75 = 191.25 ≈ 191.
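A tiny Python sketch of that arithmetic, for anyone who wants to verify it:

k1, k2 = 3, 1                                # 3:1 mix of 100% red with black

r_int = (k1 * 255 + k2 * 0) / (k1 + k2)      # 191.25, rounds to 191
r_float = (k1 * 1.0 + k2 * 0.0) / (k1 + k2)  # 0.75

print(round(r_float * 255))                  # 191, consistent with the integer scale
print(min(255, int(r_float * 256)))          # 192, inconsistent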
I need the logic for the following situation; I am clueless about how to do this.
Consider: for January I have $10 revenue and for February I have $20 revenue.
My growth would be ((20-10)/10)*100% = 100%.
Now suppose I have $0 revenue for March.
Then my growth would be ((0-10)/10)*100% = -100%. Should I call this a negative percentage? (I know it's deflation.)
Going one step further: if I now have $20 revenue for April, how can I calculate the growth? Surely the following formula is wrong: ((20-0)/0)*100% = ?????
My basic questions are:
Is there a better way to find the growth rate than the one above?
If I use the aforementioned formula, should I take some value as a reference, or is that wrong as well?
This is most definitely a programming problem. The problem is that it cannot be programmed, per se. When the previous value P is actually zero, the concept of percentage change has no meaning. Zero to anything cannot be expressed as a rate, as it is outside the definition boundary of a rate. Going from 'not being' into 'being' is not a change of being; it is the creation of being.
If you're required to show growth as a percentage, it's customary to display [NaN] or something similar in these cases. A growth rate, on the other hand, would be reported in $/month. So in your example, the growth rate for April would be calculated as (20 - 0)/1 = $20/month.
In any event, determining the correct method for reporting this special case is a user decision. Is it covered in your user requirements?
There is no rate of growth from 0 to any other number. That is to say, there is no percentage of increase from zero to greater than zero and there is no percentage of decrease from zero to less than zero (a negative number). What you have to decide is what to put as an output when this situation happens. Here are two possibilities I am comfortable with:
Any time you have to show a rate of increase from zero, output the infinity symbol (∞). That's Alt + 236 on your number pad, in case you're wondering. You could also use negative infinity (-∞) for a negative growth rate from zero.
Output a statement such as "[Increase/Decrease] From Zero" or something along those lines.
Unfortunately, if you need the growth rate for further calculations, the above options will not work; but then again, any number would feed incorrect data into those calculations anyway, so the point is moot. You'd need to update the downstream calculations to account for this eventuality.
As an aside, the ((New-Old)/Old) function will not work when your new and old values are both zero. You should create an initial check to see if both values are zero and, if they are, output zero percent as the growth rate.
How to deal with zeros when calculating percentage changes is the researcher's call and requires some domain expertise. If the researcher believes it would not distort the data, s/he may simply add a very small constant to all values to get rid of the zeros. In financial series, when dealing with trading volume, for example, we may not want to do this, because trading volume = 0 just means that the asset did not trade at all. The meaning of volume = 0 may be very different from that of volume = 0.00000000001.
This is my preferred strategy in cases where I cannot logically add a small constant to all values. Consider the percentage change formula ((New - Old)/Old) * 100. If New = 0, the percentage change is -100%. This number makes financial sense, and it is guaranteed to be the minimum percentage change in the series: it shows that trading volume experienced the maximum possible decrease, going from any number to 0. So I am fine with this value being in my percentage change series. If I normalize the series, even better, since this (possibly) relatively large absolute value will be analyzed on the same scale as the other variables.

Now, what if the Old value = 0? That's a trickier case. The percentage change from 0 to 1 equals the percentage change from 0 to a million: infinity. The fact that we call both "infinity" is problematic. In this case, I would set the infinities equal to np.nan and interpolate them.
The following graph shows what I discussed above: starting from series 1, we get series 4, which is ready to be analyzed, with no Infs or NaNs.
One more thing: a lot of the time, the reason for calculating percentage change is to make the data stationary. So if your original series contains zeros and you wish to convert it to percentage change to achieve stationarity, first make sure it is not already stationary; if it is, you don't have to calculate percentage change at all. The point is that series that take the value 0 a lot (the problem the OP has) are very likely already stationary, for example the volume series I considered above. Imagine a series oscillating above and below zero, hitting 0 at times: such a series is very likely already stationary.
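In code, the replace-and-interpolate strategy might look like the following sketch (the volume series is made up):

import numpy as np
import pandas as pd

volume = pd.Series([5.0, 0.0, 3.0, 4.0, 0.0, 2.0])  # hypothetical series with zeros

pct = volume.pct_change() * 100                # any x -> 0 gives -100%; 0 -> x gives inf
pct = pct.replace([np.inf, -np.inf], np.nan)   # drop the undefined "from zero" cases
print(pct.interpolate())                       # fill them in from their neighbours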
There are a couple of things to consider.
If your revenue is 0 for both months, then the change from 0 to 0 is itself 0, so it is meaningful in that sense. You could adjust by adding a small number, so that it becomes the change from 0.1 to 0.1; then the change and the percentage change would both be 0 (0%).
Now think about the case where you change from 0 to 20. Here the small-constant practice causes massive reporting issues: depending on what small number you choose to add, e.g. 0.1 versus 0.001, your percentage change differs a hundredfold. So there is a problem with that practice.
Note also that if you have a change from 1 to 20, the % change would be 19/1 = 1900%. The % change doesn't make much sense when you start off so low; it becomes very sensitive to any change and may skew your results if other data points are on a different scale.
So it is important to understand your data, and in this case, how frequently you encounter 0s and extreme numbers in it.
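A quick illustration of that hundredfold sensitivity, sketched in Python for the 0-to-20 case:

for eps in (0.1, 0.001):
    old, new = 0 + eps, 20 + eps
    print(eps, (new - old) / old * 100)  # about 20000 vs about 2000000, a factor of 100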
You can add 1 to each value.
Example:
new = 5;
old = 0;
((1 + new) - (old + 1)) / (old + 1)
= 5 / 1, which * 100 ==> 500%
It should be (new minus old) divided by the mod average (the average of the moduli |old| and |new|),
with a special case when both values are zero.
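One way to read that suggestion, as a Python sketch (symmetric_change is a hypothetical helper name):

def symmetric_change(old, new):
    # Special case: no change between two zeros.
    if old == 0 and new == 0:
        return 0.0
    # Divide by the average of the moduli instead of by old alone.
    return (new - old) / ((abs(old) + abs(new)) / 2)

print(symmetric_change(0, 20))  # 2.0, i.e. 200%; results always stay within [-2, 2]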
When both values are zero, then the change is zero.
If one of the values is zero, it's infinite (ambiguous), but I would set it to 100%.
Here is C++ code (where v1 is the previous (old) value and v2 is the new one):
#include <cmath>  // for std::abs on doubles

double percentChange(double v1, double v2) {
    double result = 0;
    if (v1 != 0 && v2 != 0) {
        // Both values are non-zero: use the standard formula.
        result = (v2 / v1) - 1;
    } else {
        // The change is zero when both values are zero, otherwise it's 100%.
        result = (v1 == 0 && v2 == 0) ? 0 : 1;
    }
    // Make the sign follow the direction of the change.
    result = (v2 > v1) ? std::abs(result) : -std::abs(result);
    // Note: to have the format in hundreds (percent), multiply the result by 100.
    return result;
}
The same logic in JavaScript:
function percentChange(initialValue, finalValue) {
    if (initialValue == 0 && finalValue == 0) {
        return 0 // no change between two zeros
    } else if (initialValue != 0 && finalValue != 0) {
        return (finalValue / initialValue) - 1 // standard formula
    } else {
        return finalValue > initialValue ? 1 : -1 // change from or to zero: plus or minus 100%
    }
}
It should be float('inf') * new_value.
That behaves better in later comparisons, because a comparison with float('nan') always returns False (except for !=), since NaN doesn't match any value.
Use the formula below, as this gives a 100% growth rate in the 0-to-any-number case:
IFERROR((NEW-OLD)/OLD, 100%)
I am trying to change the value of the upper bound inside a FOR loop, but the loop keeps running up to the upper bound that was defined at the start.
By that logic the loop should run forever, since the value of v_num is always one ahead of i, but the loop executes three times. Please explain.
This is the code:
DECLARE
    v_num number := 3;
BEGIN
    FOR i IN 1..v_num LOOP
        v_num := v_num + 1;
        DBMS_OUTPUT.PUT_LINE(i || ' ' || v_num);
    END LOOP;
END;
Output:
1 4
2 5
3 6
This behavior is as specified in the documentation:
FOR-LOOP
...
The range is evaluated when the FOR loop is first entered and is never re-evaluated.
(Oracle Documentation)
Generally, FOR loops are for a fixed number of iterations.
For indeterminate looping, use WHILE.
This isn't Oracle-specific, and it is why there are separate looping constructs.
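For what it's worth, Python's for loop shows the same behavior: range() is evaluated once on entry, so this sketch mirrors the PL/SQL output above:

v_num = 3
for i in range(1, v_num + 1):  # range(1, 4) is constructed once, right here
    v_num += 1                 # reassigning v_num does not affect the loop
    print(i, v_num)
# prints 1 4, then 2 5, then 3 6, the same as the PL/SQL block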
While it is generally considered a bad idea to change the loop variable's value, sometimes it seems like the only way to go. However, you might find that loops are optimized, and that might be what is happening here. There's nothing preventing the language designers from saying "the upper bound of the FOR loop is evaluated only once", and this appears to be the rule that PL/SQL is following here.