How to constrain the slope to be positive in regression? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Want to improve this question? Update the question so it focuses on one problem only by editing this post.
Closed 3 years ago.
Improve this question
How can I constrain my regression coefficient (only the slope, not the intercept) to be positive? It's a general statistical question, but specifically I would like an R solution, and even more specifically one for Model II regression (major axis regression).

You could do linear regression with nls and limit the parameter range there.
Example: using the nl2sol algorithm from the PORT library, fit a line with an unconstrained intercept and a slope constrained to lie between 1.4 and 1.6:
nls(y~a+b*x, algorithm="port", start=c(a=0,b=1.5), lower=c(a=-Inf,b=1.4), upper=c(a=Inf,b=1.6))
This solution and others are explained in the more general question at https://stats.stackexchange.com/questions/61733/linear-regression-with-slope-constraint
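For the original question (slope non-negative, intercept free), here is a minimal sketch on simulated data; the variable names and data are made up:

```r
# Minimal sketch with simulated data: only the slope b gets a lower
# bound of 0; the intercept a stays unconstrained.
set.seed(1)
x <- 1:50
y <- 2 + 0.3 * x + rnorm(50)

fit <- nls(y ~ a + b * x,
           algorithm = "port",
           start = c(a = 0, b = 1),
           lower = c(a = -Inf, b = 0))  # slope constrained to be >= 0
coef(fit)
```

When the unconstrained estimate is already positive, as here, the bound is inactive and the fit matches ordinary least squares.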

Related

Calculating AWE from mclust package [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 3 years ago.
Is it possible to calculate the Approximate Weight of Evidence (AWE) from information obtained via the mclust R package?
According to the R documentation, you should have access to the function awe(tree, data) since version R1.1.7.
From the example on the linked page (copied here in case the link breaks):
data(iris)
iris.m <- iris[,1:4]
awe.val <- awe(mhtree(iris.m), iris.m)
plot(awe.val)
Following the formula from Banfield, J. and Raftery, A. (1993), Model-based Gaussian and non-Gaussian clustering, Biometrics, 49, 803-821:
-2*model$loglik + model$d*(log(model$n) + 1.5)
where model is the fitted model for the selected number of clusters. Keeping this question in the hope that it may help someone in the future.
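As a hedged sketch, the same formula can be evaluated from a current Mclust fit, which exposes the log-likelihood, the number of estimated parameters, and the sample size (the field names loglik, df and n are those of recent mclust versions; check your version's documentation):

```r
library(mclust)  # assumed installed; provides Mclust()

data(iris)
fit <- Mclust(iris[, 1:4])

# AWE per Banfield & Raftery (1993): -2*loglik + d*(log(n) + 1.5),
# taking d as the number of estimated parameters (fit$df) and n as
# the sample size (fit$n).
awe <- -2 * fit$loglik + fit$df * (log(fit$n) + 1.5)
awe
```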

Interpretation of ACF plot [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
I need help interpreting this ACF plot (it shows a sine-like wave pattern).
Maybe you will need to examine the PACF. You have a large peak at the first lag, followed by a decaying wave that alternates between positive and negative correlations; this can indicate an autoregressive term of higher order in the data.
Use the partial autocorrelation function (PACF) to determine the order of the autoregressive term.
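An illustrative sketch with simulated data: an AR(2) process with complex characteristic roots gives exactly this damped sine-wave ACF, while the PACF cuts off after lag 2, revealing the order of the autoregressive term.

```r
# Simulate a stationary AR(2) process whose ACF decays as a damped
# sine wave; the PACF shows spikes only at lags 1 and 2.
set.seed(42)
x <- arima.sim(model = list(ar = c(1.5, -0.75)), n = 500)

acf(x)   # damped, alternating sine-like decay
pacf(x)  # clear spikes at lags 1 and 2, then roughly zero
```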

interpreting posterior distribution in JAGS [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 6 years ago.
Summing up the posterior probabilities of a discrete distribution gives a value of more than one. Where am I going wrong?
This is the posterior histogram generated by JAGS (figure not reproduced here).
My guess is that the histogram is supposed to be interpreted as a density function, and the probability mass of each bar is therefore the width of the bar times the height of the bar.
Given that interpretation, it looks like the masses sum to approximately 1: the width of each bar appears to be 1/2, and the heights sum to about 2 (judging by eye).
If that's not it, you'll have to give more information, e.g. show your R script and any data.
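The density interpretation can be checked directly in R with simulated draws (the data here are made up): hist() reports bar heights as densities, so the mass per bar is width times height, and the masses, not the raw heights, sum to 1.

```r
# With density-scaled histogram bars, sum(width * height) is 1 even
# though the heights alone can sum to more than 1 when bars are
# narrower than 1.
set.seed(7)
draws <- rnorm(10000)

h <- hist(draws, breaks = 40, plot = FALSE)
sum(diff(h$breaks) * h$density)  # sums to 1
```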

Regression of a complex model in R [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 7 years ago.
This is just a simplified version of what I have to do.
I have to perform a linear mixed effects regression for the following model:
y~a1+F1+F2+(1|Effect1)+(1|Effect2)
The problem I have is coding the fixed effects.
F1 = a2*(M - M1) + a4*(10 - M)    if M >= M1
   = a3*(M - M2) + a4*(10 - M1)   if M < M1
F2 = a5*ln(R + a6) + a7*ln(V)     if V < 100
   = a5*1.5 + a7                  if V >= 100
M, R, V, y, M1 and M2 are given, as are Effect1 and Effect2. It is also a large data set.
I have to find the regression coefficients a1, a2, a3, a4, a5, a6, a7. Is there a way to do this in R, e.g. by specifying the coefficients as variables, or some other approach?
Edit: I have removed the F3 term and modified the F2 term because they were causing a lot of confusion, and I am mainly concerned about the first two terms.
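One hedged approach, sketched on simulated data with the random effects omitted: the piecewise terms can be written with ifelse() directly inside an nls() formula, so a1..a7 are estimated as ordinary parameters. All coefficient values and starting values below are made up; the same fixed-effects formula could then be handed to nlme::nlme() to add the random intercepts (not shown here).

```r
# Simulate data from the piecewise model (coefficients made up);
# random effects are omitted to keep the sketch self-contained.
set.seed(1)
n  <- 500
M  <- runif(n, 4, 8); R <- runif(n, 1, 50); V <- runif(n, 50, 150)
M1 <- 5; M2 <- 6

F1 <- ifelse(M >= M1, 0.8*(M - M1) + 0.2*(10 - M), 0.5*(M - M2) + 0.2*(10 - M1))
F2 <- ifelse(V < 100, -0.3*log(R + 2) + 0.1*log(V), -0.3*1.5 + 0.1)
y  <- 1 + F1 + F2 + rnorm(n, sd = 0.1)

# ifelse() encodes the piecewise definitions; a1..a7 become nls parameters.
fit <- nls(
  y ~ a1 +
    ifelse(M >= M1, a2*(M - M1) + a4*(10 - M), a3*(M - M2) + a4*(10 - M1)) +
    ifelse(V < 100, a5*log(R + a6) + a7*log(V), a5*1.5 + a7),
  start = c(a1 = 1, a2 = 0.5, a3 = 0.5, a4 = 0.1, a5 = -0.1, a6 = 1, a7 = 0.1),
  control = nls.control(maxiter = 200)
)
coef(fit)
```

Note that nonlinear least squares of this kind is sensitive to starting values, especially for a6 inside the log, so in practice you would try several starts.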

How are loess and locfit different? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
'loess' is implemented in R's 'stats' package and 'locfit' in the 'locfit' package. They are both nonparametric local regression methods. What are the differences between the two?
Based on this introduction it would appear that locfit is a generalization of loess: you can obtain a loess-style fit using locfit, but locfit also has additional options for fitting more general models, including logistic-regression-style fits and general density estimation, as well as different weighting formulas and even varying bandwidths.
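A small comparison sketch on simulated data (assuming the locfit package is installed from CRAN): both fits use a local quadratic with a nearest-neighbour fraction of 0.5, via span= in loess and nn= inside locfit's lp() term, so the fitted values come out very close.

```r
library(locfit)  # assumed installed from CRAN

set.seed(3)
x <- sort(runif(200, 0, 10))
y <- sin(x) + rnorm(200, sd = 0.2)

# Comparable settings: local quadratic, nearest-neighbour bandwidth 0.5.
fit_loess  <- loess(y ~ x, span = 0.5, degree = 2)
fit_locfit <- locfit(y ~ lp(x, nn = 0.5, deg = 2))

head(cbind(loess = fitted(fit_loess), locfit = fitted(fit_locfit)))
```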
