How do I get the coefficients of a regression in R? In the equation of the regression line, y = a + bx, the slope (angular coefficient) would be the "b". I would also like to know how to calculate the intercept, the "a" of the equation.
I only speak enough Spanish to understand a little bit; try:
Model <- lm(y ~ 1 + x)            # the "1" makes the intercept explicit (lm includes it by default)
summary(Model)$coefficients       # estimates, standard errors, t and p values
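For completeness, here is a self-contained sketch with made-up example data; coef() (equivalently, the first column of summary(Model)$coefficients) gives the intercept "a" and the slope "b" directly:
x <- c(1, 2, 3, 4, 5)                      # made-up example data
y <- c(2.1, 3.9, 6.2, 8.1, 9.8)
Model <- lm(y ~ x)                         # the intercept is included by default
coef(Model)                                # (Intercept) is "a", x is "b"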
With respect to Bayesian curve fitting, how is the following result, eq. (1.68) of Bishop, Pattern Recognition and Machine Learning, derived?
p(t | x, x, t) = int_{w} p(t | x, w) p(w | x, t) dw
(here the second x and t, printed in bold in Bishop, denote the training inputs and targets, and the integral is over the model parameters w)
Let's just consider a simpler case using the law of total probability.
If w1, w2 are disjoint events that together cover the whole sample space, then
p(A) = p(A|w1) p(w1) + p(A|w2) p(w2)
we can extend this to any number of events
p(A) = sum_{wi} p(A|wi) p(wi)
or indeed take the limit
p(A) = int_{w} p(A|w) p(w) dw
We can also condition A on another event B, provided A depends on B only through the w's (i.e. A is conditionally independent of B given w):
p(A|B) = int_{w} p(A|w) p(w|B) dw
and additionally on an event C which the w's do not depend on:
p(A|B,C) = int_{w} p(A|w,C) p(w|B) dw
which is just your formula with different variables.
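If it helps to see the continuous version in action, here is a minimal numeric check in R with made-up numbers: a single weight w with an assumed Gaussian posterior N(m, s^2) and a Gaussian likelihood t | x, w ~ N(w*x, sigma^2); in that case the integral collapses to another Gaussian.
m <- 0.8; s <- 0.3                         # assumed posterior mean and sd of w
sigma <- 0.5                               # assumed noise sd
x_new <- 2.0; t_new <- 1.5                 # a new input and candidate target
# left-hand side: integrate the likelihood against the posterior over w
lhs <- integrate(function(w) dnorm(t_new, mean = w * x_new, sd = sigma) *
                             dnorm(w, mean = m, sd = s),
                 lower = -Inf, upper = Inf)$value
# closed form: a Gaussian likelihood averaged over a Gaussian posterior
# is again Gaussian, N(m * x, sigma^2 + x^2 * s^2)
rhs <- dnorm(t_new, mean = m * x_new, sd = sqrt(sigma^2 + x_new^2 * s^2))
all.equal(lhs, rhs)                        # TRUE (up to integration tolerance)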
How can I create a normal probability plot of residuals in R so that there are normal probability values on the y-axis?
Normally you'll make the normal probability plot with qqnorm and qqline.
Example:
fit <- lm(resp ~ dep1 + dep2)              # your fitted model
qqnorm(fit$residuals, datax = TRUE)        # normal Q-Q plot of the residuals
qqline(fit$residuals, datax = TRUE)        # reference line through the quartiles
You can plot the residuals against their cumulative normal probabilities with plot and pnorm:
plot(fit$residuals, pnorm(fit$residuals))
(with the probabilities on the y-axis; if the residuals are not roughly on a unit scale, standardize them first, e.g. pnorm(fit$residuals / sd(fit$residuals)))
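For a fully runnable version, here is a sketch using the built-in mtcars data in place of the resp/dep1/dep2 placeholders, with standardized residuals so the pnorm values are meaningful:
fit <- lm(mpg ~ wt + hp, data = mtcars)    # example model on built-in data
std_res <- rstandard(fit)                  # standardized residuals
qqnorm(std_res, datax = TRUE)              # normal Q-Q plot, data on the x-axis
qqline(std_res, datax = TRUE)
# cumulative normal probabilities on the y-axis:
plot(sort(std_res), pnorm(sort(std_res)),
     xlab = "Standardized residual", ylab = "Normal probability")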
What is the fastest way to find the maximum root of a cubic function in R?
a x^3 + b x^2 + c x + d = 0
Is there anything wrong with the base function polyroot?
From its help page: "Find zeros of a real or complex polynomial."
An example with a cubic (polyroot() takes the coefficients in increasing order, so c(1, 3, 3, 1) is 1 + 3x + 3x^2 + x^3 = (x + 1)^3):
polyroot(c(1,3,3,1))
# [1] -1+0i -1+0i -1-0i
Here is a function to find the maximum non-complex (real) root of a polynomial:
maxReal <- function(params){
  x <- polyroot(params)                    # all roots, as complex numbers
  # keep the roots whose imaginary part is zero up to numerical tolerance
  reals <- sapply(x, function(i) isTRUE(all.equal(Im(i), 0)))
  max(Re(x)[reals])
}
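A usage sketch: for a x^3 + b x^2 + c x + d, pass the coefficients in increasing order, i.e. c(d, c, b, a). For example, x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3):
maxReal(c(-6, 11, -6, 1))
# [1] 3   (up to floating-point rounding)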
A reviewer of a paper I submitted to a scientific journal insists that my function
f1[b_, c_, t_] := 1 - E^((c - t)/b)/2
is "mathematically equivalent" to the function
f2[b0_, b1_, t_] := 1 - b0 E^(-b1 t)
He insists
While the models might appear (superficially) to be different, the f1
model is merely a re-parameterisation of the f2 model, and this can be
seen easily using high-school mathematics.
I survived High School, but I don't see the equivalence, and FullSimplify does not yield the same results. Perhaps I am misunderstanding FullSimplify. Is there a way to authoritatively refute or confirm the assertion of the reviewer?
If c and b are constants, you can factor them out relatively easily given the basic property of the exponential
e^(A + B) = e^A * e^B
so
e^((c - t)/b) = e^(c/b - t/b) = e^(c/b) * e^(-t/b)
and, absorbing the remaining factor of 1/2 into the constant, f1 is exactly f2 with b0 = e^(c/b)/2 and b1 = 1/b.
The latter exponential form is commonly used when solving linear differential equations.
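A quick numeric sanity check (in R, with arbitrary made-up values for b, c and t), using the mapping b0 = e^(c/b)/2 and b1 = 1/b:
f1 <- function(b, c, t) 1 - exp((c - t)/b)/2
f2 <- function(b0, b1, t) 1 - b0 * exp(-b1 * t)
b <- 3; cc <- 1.7                          # arbitrary parameter values
t <- seq(0, 10, by = 0.5)
all.equal(f1(b, cc, t), f2(exp(cc/b)/2, 1/b, t))   # TRUE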
How can I fit this equation to a set of data (x, y)?
y = a x ^ b + c
I tried the least-squares method (in Maple) and a power-law fit, but it doesn't work.
I solved it!
Write Y = a x ^ b with Y = y - c.
Then 'a' and 'b' can be obtained by least squares if 'c' is known, so the squared error can be calculated as a function of 'c' alone (not of 'a' or 'b').
Now I just have to minimize the squared error, which is a function of only 'c'.
This can be done with a numerical method.
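One way to read this recipe in R (simulated data; the log-linear step assumes y - c > 0, so the search interval for c is kept below min(y)):
set.seed(1)
x <- seq(1, 10, length.out = 50)
y <- 2 * x^1.5 + 3 + rnorm(50, sd = 0.3)   # simulated data with a = 2, b = 1.5, c = 3
sse_for_c <- function(cc) {
  fit <- lm(log(y - cc) ~ log(x))          # for this c, least squares gives log(a) and b
  a <- exp(coef(fit)[1]); b <- coef(fit)[2]
  sum((y - (a * x^b + cc))^2)              # squared error in the original scale
}
opt <- optimize(sse_for_c, interval = c(0, min(y) - 1e-6))
opt$minimum                                # estimated c (the true value here is 3)
Alternatively, nls(y ~ a * x^b + c, start = list(a = 1, b = 1, c = 0)) fits all three parameters at once, given reasonable starting values.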