Formula to convert Gram/Kilo to Gram/Pound

I am looking for the formula to convert Gram/Kilo to Gram/Pound
Example:
1 g/kg = 0.45 g/lb
Here is a calculator to do it on Google:
https://www.google.co.nz/search?safe=off&espv=2&q=gram+per+kilo+to+gram+per+pound
Does anyone know the formula?

You practically have the formula in the question:
g/kg * 0.45359237 = g/lb
since 1 lb = 0.45359237 kg exactly (the 0.45 in your example is this factor rounded).
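In code, the conversion is a one-liner; here is a quick Python sketch (the function name is mine, and the factor is the exact kilogram-per-pound definition):

```python
KG_PER_LB = 0.45359237  # exact, by definition of the international pound

def g_per_kg_to_g_per_lb(value):
    """Convert a quantity in grams per kilogram to grams per pound."""
    return value * KG_PER_LB

g_per_kg_to_g_per_lb(1)  # 0.45359237, i.e. the 0.45 g/lb from the example
```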


Get quantile for each value

Is there an implemented (!) function in R which gives you the empirical quantile for each value? I couldn't find any ...
Let's say we have x
x = c(1,3,4,2)
I want to have the quantile of each element.
[1] 0.25, 0.75, 1, 0.5
Thank you very much!
You can use the ecdf() function:
ecdf(x)(x)
[1] 0.25 0.75 1.00 0.50
ecdf(x) creates a function, and you pass the elements of x to that function. The syntax admittedly looks strange.
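The same computation can be sketched in pure Python (this is a minimal equivalent of ecdf(x)(x), not part of the R answer; the function name is mine):

```python
def empirical_quantiles(x):
    """For each element, the fraction of elements <= it
    (the empirical CDF evaluated at each value)."""
    n = len(x)
    return [sum(1 for u in x if u <= v) / n for v in x]

empirical_quantiles([1, 3, 4, 2])  # [0.25, 0.75, 1.0, 0.5]
```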

How to calculate f(2) measure?

I'm trying to calculate f(0.5) and f(2) for this set of data:
Precision: 0.6
Recall: 0.45
My results:
f(1) : 0.51
f(0.5): 0.56 (wrong)
f(2) : 0.47 (wrong)
I calculated f(1) Measure using this formula
(2x(PxR))/(P+R)
But when I try to calculate f(2) or f(0.5) my results are slightly off
f(0.5) should be 0.54
f(2) should be 0.49
I used the following formula:
(b^2 + 1) x ((P x R)/(b^2)+R)
b = the f measure I'm using, either 0.5 or 2
What am I doing wrong?
And if possible, could someone calculate the f(0.5) and f(2) measure for me and confirm that I am wrong?
Any help is appreciated, will do my best to make this question as clear as possible. Please leave a comment if it's not clear enough and I will try to add to it
Thanks
Fortunately, Wikipedia is searchable.
The correct equation (on the Wikipedia page it has real math formatting, which is easier to read) is:
F(β) = (1 + β²) · (P·R / (β²·P + R))
Or in Python:
>>> def F(beta, precision, recall):
...     return (beta*beta + 1)*precision*recall / (beta*beta*precision + recall)
...
>>> F(1, .6, .45)
0.5142857142857143
>>> F(2, .6, .45)
0.4736842105263158
>>> F(0.5, .6, .45)
0.5625000000000001
That looks pretty close to the values you are getting, and not very similar to the ones you say are "correct". So it seems worth asking "Where do the supposedly correct values come from?"

Error in Weibull distribution

file.data has the following values to fit with a Weibull distribution:
x y
2.53 0.00
0.70 0.99
0.60 2.45
0.49 5.36
0.40 9.31
0.31 18.53
0.22 30.24
0.11 42.23
Fitting the Weibull distribution function f(x)=1.0-exp(-lambda*x**n) with
fit f(x) 'data.dat' via lambda, n
gives an error, and the final plot of f(x) against the x-y data shows a large discrepancy.
Any feedback would be highly appreciated. Thanks!
Several things:
You must skip the first line (if it really is x y).
You must use the correct function (the pdf and not the CDF, see http://en.wikipedia.org/wiki/Weibull_distribution, like you did in https://stackoverflow.com/q/20336051/2604213)
You must use an additional scaling parameter, because your data are not normalized
You must select adequate initial values for the fitting.
The following works fine:
f(x) = (x < 0 ? 0 : a*(x/lambda)**(n-1)*exp(-(x/lambda)**n))
n = 0.5
a = 100
lambda = 0.15
fit f(x) 'data.dat' every ::1 via lambda, n, a
set encoding utf8
plot f(x) title sprintf('λ = %.2f, n = %.2f', lambda, n), 'data.dat' every ::1
That gives the following fit (with gnuplot 4.6.4):
If that's the actual command you provided to gnuplot, it won't work because you haven't yet defined f(x).
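To make the CDF-versus-density distinction concrete, here is a small Python sketch of both functions in the answer's parameterization (function names are mine, and the parameter values used in the comment are the gnuplot script's initial guesses, not fitted results):

```python
import math

def weibull_cdf(x, lam, n):
    """The cumulative form the question fitted: 1 - exp(-(x/lam)**n)."""
    return 1.0 - math.exp(-(x / lam) ** n) if x >= 0 else 0.0

def scaled_weibull_pdf(x, a, lam, n):
    """The scaled density the answer fits: a*(x/lam)**(n-1)*exp(-(x/lam)**n);
    a absorbs the usual n/lam normalization plus the data's scale."""
    if x <= 0:
        return 0.0
    return a * (x / lam) ** (n - 1) * math.exp(-(x / lam) ** n)

# With the initial guesses a=100, lambda=0.15, n=0.5, the density is
# large at small x and near zero at x=2.53, matching the data's shape,
# whereas the CDF rises from 0 toward 1 and cannot match it.
```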

KS test for power law

I'm attempting to fit a power-law distribution to a data set, using the method outlined by Aaron Clauset, Cosma Rohilla Shalizi and M.E.J. Newman in their paper "Power-Law Distributions in Empirical Data".
I've found code to compare to my own, but I'm a bit mystified about where some of it comes from. The story thus far is:
to identify a suitable xmin for the power-law fit, we take each possible xmin, fit a power law to the data above it, compute the corresponding exponent (a), then the KS statistic (D) between the fit and the observed data, and finally pick the xmin that minimises D. The KS statistic is computed as follows:
cx <- c(0:(n-1))/n # n is the sample size for the data >= xmin
cf <- 1-(xmin/z)^a # the cdf for a powerlaw z = x[x>=xmin]
D <- max(abs(cf-cx))
What I don't get is where cx comes from; surely we should be comparing the distance between the empirical distribution and the fitted distribution, something along the lines of:
cx = ecdf(sort(z))
cf <- 1-(xmin/z)^a
D <- max(abs(cf-cx(z)))
I think I'm just missing something very basic, but please do correct me!
The answer is that they are (almost) the same. The easiest way to see this is to generate some data:
z = sort(runif(5, xmin, 10*xmin))
n = length(z)
Then examine the values of the two CDFs
R> (cx1 = c(0:(n-1))/n)
[1] 0.0 0.2 0.4 0.6 0.8
R> (cx2 = ecdf(sort(z)))
[1] 0.2 0.4 0.6 0.8 1.0
Notice that they are almost the same: essentially cx1 gives, at each sorted point, the fraction of observations strictly below it, whilst cx2 gives the fraction less than or equal to it.
The advantage of the top approach is that it is very efficient and quick to calculate. The disadvantage is that if your data aren't truly continuous, e.g. z = c(1, 1, 2), cx1 is wrong. But then you shouldn't be fitting your data to a continuous distribution in that case anyway.
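For comparison, the statistic can be sketched in pure Python (the function name is mine; this mirrors the R snippets above under the question's parameterization, where the fitted tail CDF is F(x) = 1 - (xmin/x)^a):

```python
def ks_power_law(z, xmin, a):
    """KS distance between the empirical CDF of the tail z >= xmin
    and the fitted power-law CDF F(x) = 1 - (xmin/x)**a."""
    tail = sorted(v for v in z if v >= xmin)
    n = len(tail)
    cx = [i / n for i in range(n)]            # empirical CDF just below each sorted point
    cf = [1 - (xmin / v) ** a for v in tail]  # fitted CDF at each point
    return max(abs(f - e) for f, e in zip(cf, cx))
```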

Mathematical formula conversion

OK. I have a formula to find d, but I need a new formula to find A.
d=sqrt(((1.27*A)/coef))
For example, I have:
d=0.36
coef=10
I need A, and with these values the new formula must give A = 1.
I think it's a school-level question.
d=sqrt(((1.27*A)/coef))
d^2 = (1.27*A)/coef
coef*d^2 = (1.27*A)
coef*d^2/1.27 = A
A = coef*d^2/1.27
Here you go:
A = (coef * d * d) / 1.27
Your equation says to take A and...
Multiply by 1.27
Divide by coef
Take the square root
... to get d. So, to get A starting from d you need to work backwards, undoing each step:
Square
Multiply by coef
Divide by 1.27
Or, in equation form, A = d^2 * coef / 1.27.
The math used to read and manipulate equations like this is called "Algebra".
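The rearrangement is easy to sanity-check with a round trip in Python (variable names follow the question):

```python
import math

def d_from_A(A, coef):
    """The original formula: d = sqrt(1.27 * A / coef)."""
    return math.sqrt(1.27 * A / coef)

def A_from_d(d, coef):
    """The rearranged formula: A = coef * d**2 / 1.27."""
    return coef * d * d / 1.27

# Applying one after the other returns the starting value, confirming
# the rearrangement; with the question's rounded d = 0.36 and coef = 10,
# A comes out close to (but not exactly) 1.
```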
