When naming variables, I'd like to be as clear as possible.
A percentage can range from 0 to 100. My public variable only accepts values between 0.0 and 1.0, so naming it a "percentage" can lead to confusion, and simply naming it a "value" will not clarify the range limit.
Is there a "percent" equivalent or naming convention for variables representing values that range from 0.0 to 1.0?
0.0 to 1.0 is a percentage as well. Your definition isn't quite right: a percentage ranges from 0% to 100%, or equivalently from 0.0 to 1.0. It means the same thing; the % sign means percent = per cent = per hundred.
The range 0.0 to 1.0 is normally used in statistics, while the range 0% to 100% is more common in everyday life, as people can wrap their heads around it more easily.
OpenGL uses the term "normalized" for values in the [0, 1] range.
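If the value lives in code, the convention can be enforced as well as named. A minimal Go sketch, assuming a hypothetical Opacity type; the field name "normalized" and the setter are illustrative, not from any particular library:

package main

import (
	"errors"
	"fmt"
)

// Opacity stores a normalized value in the closed range [0.0, 1.0].
// Naming the field "normalized" documents the range convention directly.
type Opacity struct {
	normalized float64
}

// SetNormalized rejects out-of-range input instead of silently clamping it.
func (o *Opacity) SetNormalized(v float64) error {
	if v < 0.0 || v > 1.0 {
		return errors.New("normalized value must be in [0.0, 1.0]")
	}
	o.normalized = v
	return nil
}

func main() {
	var o Opacity
	fmt.Println(o.SetNormalized(0.75)) // <nil>
	fmt.Println(o.SetNormalized(1.5))  // error: out of range
}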
I need to convert a 32-bit floating point value x in the range [0, 1] to an 8-bit unsigned integer y in the range [0, 255].
A formula I found in some C code is: y = (uint8)(255.99998f*x).
This provides the required conversion, but there is a problem with it.
Converting 0.75 yields 191, and converting 0.25 yields 63. While 0.75 + 0.25 = 1, 191 + 63 = 254, not the desired 255.
The same problem occurs with 0.5, which is converted to 127: 0.5 + 0.5 = 1, but 127 + 127 = 254 instead of 255.
There is thus a rounding error.
Can this be avoided? If so, how?
You will not be able to represent the closed interval [0.0, 1.0] accurately in [0, 255]. The most evident problem is that 0.5 + 0.5 = 1.0, so if 1.0 is represented by 255, 0.5 cannot be represented exactly.
The real problem is that 32-bit floating point numbers are represented in the IEEE 754 binary32 format. So you will find a natural injection from the half-open interval [0.0, 1.0) into [0, 255] by taking the most significant bits of the binary representation (conveniently shifted) and accepting that, at the limit, 1.0 would be represented as 256.
Then all fractions whose denominator is a power of 2 are represented exactly: 0.5 is 128, 0.25 is 64, and 0.75 is 192. But trying to map [0.0, 1.0] nicely onto [0, 255] comes down to finding a nice relation from [0, 256] (257 values) into [0, 255]...
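A minimal Go sketch of that half-open mapping, under the assumptions above (the function name is illustrative):

package main

import "fmt"

// mapHalfOpen implements the injection described above: floor(x*256), so
// every dyadic fraction maps exactly (0.5 -> 128, 0.25 -> 64, 0.75 -> 192).
// At the limit, x == 1.0 would land on 256, so it is clamped to 255.
func mapHalfOpen(x float32) uint8 {
	v := int(x * 256)
	if v > 255 {
		v = 255
	}
	return uint8(v)
}

func main() {
	for _, x := range []float32{0.25, 0.5, 0.75, 1.0} {
		fmt.Println(x, "->", mapHalfOpen(x))
	}
}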
"The same problem occurs with 0.5, which is converted to 127: 0.5 + 0.5 = 1, but 127 + 127 = 254 instead of 255."
No mapping can satisfy this requirement, since 255/2 is not representable as an integer. You have to decide what this mapping means to you and what properties it requires, but no mapping to integers can satisfy all of them at once.
If you choose a floor mapping, as you've shown in your question, then 0.5f -> 127, in which case your algorithm or program might interpret this as defining the range [0-127], with 128 elements: exactly half of the 256 elements in [0-255], since the remaining range [128-255] also has 128 elements.
If, however, you choose an analytical mapping like
y = round(255*x);
this provides the most accurate numerical value: the output will always be the closest integer to the scaled input value. For a value of 0.5f, this produces 128, which is exactly half of the number of bins in the output range. In this case your algorithm might interpret this as the number of elements in the range being half of the number in the input range. It's really up to you to design the algorithm and the interpretation of the mapping around the limitations imposed by discarding the resolution of a 32-bit float.
Ultimately, [0.0-1.0] is about measuring something and [0-255] is about counting something... only you know what you're measuring and what you're counting, so we can't really make this decision for you.
If your application is measurement-like, then round(255*x) will produce the closest integer to the scaled input: a value of 0.0039062, for example, scales to 255*0.0039062 ≈ 0.996, within half a bin of a perfect map to 1, and so maps to 1.
If your application is counting-like, and you are more interested in binning the float values equally, then a floor mapping (like your original suggestion) maps an equal range of the input to each bin. Using the round equation would leave the 0 bin and the 255 bin mapped to only half the range of the rest of the bins. Using a floor mapping produces an equal distribution of the input range over the output bins, but sacrifices numerical precision: the value 0.0039062 from the example above would map to 0 in this case, even though it is about 99.6% of the value (1/255) you would consider a perfect map to 1.
It's entirely up to you to determine which mapping makes sense for your specific application.
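A minimal Go sketch contrasting the two mappings discussed above (function names are illustrative):

package main

import (
	"fmt"
	"math"
)

// floorMap bins equal-width ranges of the input, in the spirit of the
// original C formula (uint8)(255.99998f*x).
func floorMap(x float64) uint8 {
	return uint8(255.99998 * x)
}

// roundMap returns the closest integer to 255*x (the most accurate value).
func roundMap(x float64) uint8 {
	return uint8(math.Round(255 * x))
}

func main() {
	for _, x := range []float64{0.25, 0.5, 0.75} {
		fmt.Printf("x=%.2f  floor=%d  round=%d\n", x, floorMap(x), roundMap(x))
	}
	// floor gives 63, 127, 191 (63+191 = 254); round gives 64, 128, 191 (64+191 = 255).
}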
I have to find the density and the width for each of the following classes. I have a solution, but I am confused about whether it is correct, because some sources say upperLimit - lowerLimit = class width, while others say it should be lowerLimit2 - lowerLimit1 = class width. Please have a look at my data and solution and tell me if I am doing it correctly, so I can proceed to find the density.
CLASS FREQUENCY
30.0-32.0 8
32.0-33.0 7
33.0-34.0 10
34.0-34.5 25
34.5-35.0 30
35.0-35.5 40
35.5-36.0 45
36.0-50.0 5
My solution:
We first need to find the class boundaries. In this case, they are 30.0, 32.0, 33.0, 34.0, 34.5, 35.0, 35.5, 36.0 and 50.0. The class widths are therefore c2 - c1, the difference between consecutive boundaries (e.g., 32.0 - 30.0 = 2.0).
So the class widths should be 2.0, 1.0, 1.0, 0.5, 0.5, 0.5, 0.5 and 14.0.
Looks to me like you are doing it correctly -- the quantity you want in this case is the width of the bin, which is the distance from its lower bound to its upper bound.
More generally, what you need is the ordinary (Lebesgue) measure of the bin -- your density estimate essentially compares the observed mass (i.e., the bin count) to the measure of the bin. This generalizes your example to other cases in a natural way. The Lebesgue measure of an interval is just its length, so that is the right quantity whether the intervals touch each other (as in your example) or don't touch at the endpoints (more generally). Also, if you are working in two or more dimensions, the Lebesgue measure of a bin is its area or n-dimensional volume, so in any number of dimensions it is clear what you need to compute.
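For completeness, a minimal Go sketch of the computation, assuming the usual definition density = frequency / class width (the data is taken from the question):

package main

import "fmt"

func main() {
	// Class boundaries and frequencies from the question.
	boundaries := []float64{30.0, 32.0, 33.0, 34.0, 34.5, 35.0, 35.5, 36.0, 50.0}
	frequencies := []float64{8, 7, 10, 25, 30, 40, 45, 5}

	for i, f := range frequencies {
		width := boundaries[i+1] - boundaries[i] // upper limit minus lower limit
		fmt.Printf("class %.1f-%.1f  width=%4.1f  density=%6.2f\n",
			boundaries[i], boundaries[i+1], width, f/width)
	}
}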
My math is a bit elementary, so I apologize for any assumptions in advance.
I want to fetch values that exist on a simulated bell curve. I don't want to actually create or plot a bell curve; I'd just like a function that, given an input value, can tell me the corresponding y-axis value on a hypothetical bell curve.
Here's the full problem statement:
I am generating floating point values between 0.0 and 1.0.
0.50 maps to 2.0 on the bell curve, which is the maximum. Values < 0.50 and > 0.50 drop off along the curve, so for example 0.40 and 0.60 map to the same value, which could be something like 1.8 (1.8 is arbitrarily chosen for this example, and I'd like to know how I can tweak this gradient).
Right now I'm using a very crude implementation: for example, for any value > 0.40 and < 0.60 the function returns 2.0, but I'd like to smooth this and gain more control over the descent/gradient.
Any ideas how I can achieve this in Go?
The Gaussian function described here: https://en.wikipedia.org/wiki/Gaussian_function
has a bell-curve shape. Example implementation:
package main

import (
	"fmt"
	"math"
)

const (
	a = 2.0 // height of the curve's peak
	b = 0.5 // position of the peak
	c = 0.1 // standard deviation, controlling the width of the curve
	// (smaller c -> narrower, steeper peak; larger c -> wider, flatter curve)
)

// curveFunc evaluates the Gaussian a*exp(-(x-b)^2 / (2*c^2)).
func curveFunc(x float64) float64 {
	return a * math.Exp(-math.Pow(x-b, 2)/(2.0*math.Pow(c, 2)))
}

func main() {
	for _, x := range []float64{0.4, 0.5, 0.6} {
		fmt.Printf("curveFunc(%.2f) = %f\n", x, curveFunc(x))
	}
}
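With these constants, curveFunc(0.5) returns the peak value 2.0. To hit a specific point instead, solve the Gaussian for c: to get curveFunc(0.4) = 1.8 (the value from the question) with a = 2.0 and b = 0.5, use c = |x-b| / sqrt(2*ln(a/y)) = 0.1 / sqrt(2*ln(2.0/1.8)) ≈ 0.218. Raising c flattens the descent; lowering it steepens it.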
The ContourPlot function in Mathematica automatically gives you a legend and colored contours that are uniformly distributed over the plot (for example, blue for function values from 0.1 to 0.2, green from 0.2 to 0.3, etc.). In my case, the function I plot has a large number of values between 0.1 and 0.2 and only a few from 0.2 to 1. If I want to distinguish the values from 0.1 to 0.2 better, using several colors for that section, while showing the values from 0.2 to 1 in a single color, how should I do this?
I would use the Mathematica function Hue[z] to assign a color to your contours. To do this, you're going to use the option ColorFunction, like this:
ContourPlot[myFunction, {x,-10,10}, {y,-10,10}, ColorFunction -> Function[{f},Hue[g[f]]]]
In this code, g[f] is some function that maps the contour level to a hue. (Note that Hue expects a value between 0 and 1; arguments outside that range are wrapped modulo 1.) You said you wanted many colors between 0 and 0.2, and only a few between 0.2 and 1, so I would use something like
g[f_] := 100*(5*f)^(1/4)
Obviously you can change this to fit. If this doesn't help, you may need to increase the number of contours, using the option Contours->n, where n is how many you want. Hope this helps!
I have groups of binary strings in which each bit represents a feature of a variable. For example, I have a color variable where red, blue and green are the features; thus if I have 010, I have a blue object.
I need to get the center of these objects by calculating a weighted mean. For example, 010 weighs 0.5, 100 weighs 0.4 and 001 weighs 0.8, giving [010*0.5 + 100*0.4 + 001*0.8]/[1.7]. Treating each string as a vector of bits, this works out to (0.4, 0.5, 0.8)/1.7 ≈ (0.24, 0.29, 0.47).
Is there a way to get a point which represents the center of those points and which has the same properties as the other points (binary on 3 bits)?
Thank you in advance for your help.
I guess you can use the following approach from cluster analysis: choose a metric for your object space (Euclidean, taxicab, or something else), and then, for every object in the group (or, if the cardinality of the space is small, for every possible object), calculate the average distance to all objects in the group. You can then take the object with the smallest average distance as the center of the group.
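A minimal Go sketch of that approach, assuming the Hamming distance as the metric and reusing the weights from the question to form a weighted average distance (the weighting is an illustrative extension of the unweighted description above):

package main

import (
	"fmt"
	"math/bits"
)

// hamming counts the differing bits between two bit patterns.
func hamming(a, b uint) int {
	return bits.OnesCount(a ^ b)
}

func main() {
	// Group members and their weights, as in the question:
	// 010 -> 0.5, 100 -> 0.4, 001 -> 0.8.
	members := []uint{0b010, 0b100, 0b001}
	weights := []float64{0.5, 0.4, 0.8}

	best, bestAvg := uint(0), -1.0
	for candidate := uint(0); candidate < 8; candidate++ { // all possible 3-bit objects
		sum, total := 0.0, 0.0
		for i, m := range members {
			sum += weights[i] * float64(hamming(candidate, m))
			total += weights[i]
		}
		if avg := sum / total; bestAvg < 0 || avg < bestAvg {
			best, bestAvg = candidate, avg
		}
	}
	// The chosen center is itself a 3-bit binary object, as requested.
	fmt.Printf("center: %03b (weighted average distance %.3f)\n", best, bestAvg)
}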