Solving simultaneous equations with R

Suppose I have the following equations:
x + 2y + 3z = 20
2x + 5y + 9z = 100
5x + 7y + 8z = 200
How do I solve these equations for x, y and z? I would like to solve these equations, if possible, using R or any other computer tools.

This should work
A <- matrix(data=c(1, 2, 3, 2, 5, 9, 5, 7, 8), nrow=3, ncol=3, byrow=TRUE)
b <- matrix(data=c(20, 100, 200), nrow=3, ncol=1, byrow=FALSE)
round(solve(A, b), 3)
      [,1]
[1,]   320
[2,]  -360
[3,]   140
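A quick sanity check (my addition, not part of the original answer) is to multiply A by the solution and confirm it reproduces b:
x <- solve(A, b)
A %*% x                                       # should give 20, 100, 200
all.equal(as.vector(A %*% x), as.vector(b))   # TRUE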

For clarity, I modified the way the matrices were constructed in the previous answer.
a <- rbind(c(1, 2, 3),
           c(2, 5, 9),
           c(5, 7, 8))
b <- c(20, 100, 200)
solve(a, b)
In case we need to display fractions:
library(MASS)
fractions(solve(a, b))

Another approach is to model the equations using lm as follows:
lm(b ~ . + 0,
   data = data.frame(x = c(1, 2, 5),
                     y = c(2, 5, 7),
                     z = c(3, 9, 8),
                     b = c(20, 100, 200)))
which produces
Coefficients:
   x     y     z 
 320  -360   140 
If you use the tibble package you can even make it read just like the original equations:
lm(b ~ . + 0,
   tibble::tribble(
     ~x, ~y, ~z,  ~b,
      1,  2,  3,  20,
      2,  5,  9, 100,
      5,  7,  8, 200))
which produces the same output.
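As a side note, lm recovers the exact solution here only because the system is square: three observations and three coefficients leave zero residuals. A quick check (my own illustration, not part of the original answer):
fit <- lm(b ~ . + 0,
          data = data.frame(x = c(1, 2, 5),
                            y = c(2, 5, 7),
                            z = c(3, 9, 8),
                            b = c(20, 100, 200)))
coef(fit)                  # x = 320, y = -360, z = 140
max(abs(residuals(fit)))   # essentially zero: the fit is exact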

A <- matrix(data = c(1, 2, 3, 2, 5, 9, 5, 7, 8), nrow = 3, ncol = 3, byrow = TRUE)
b <- matrix(data = c(20, 100, 200), nrow = 3, ncol = 1, byrow = FALSE)
solve(A) %*% b
Note that this works because A is a square (and invertible) matrix, so its inverse exists.
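For what it's worth, solve(A, b) is generally preferred over explicitly forming the inverse with solve(A) %*% b: it solves the system directly and is typically faster and more numerically accurate. For this small, well-conditioned system the two agree:
all.equal(solve(A, b), solve(A) %*% b)   # TRUE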

Related

How to mirror the outer positions of a variable with R

I have a data frame:
tes <- data.frame(x = c(1, 1, 1, 2, 2, 2, 3, 3, 3),
y = c(1, 2, 3, 1, 2, 3, 1, 2, 3),
d = c(10, 20, 30, 100, 11, 12, 403, 43, 21))
The points look like this when plotted:
library(ggplot2)
ggplot(aes(x = x, y = y), data = tes) + geom_point(aes(color = factor(d)), size = 5)
I'd like to "mirror" the outer rows in this data to obtain data like the following, and the corresponding plot:
tes1 <- data.frame(x = c(0, 0, 0, 0,0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4),
y = c(0, 1, 2, 3, 4, 0, 1, 2, 3, 4, 0, 1, 2, 3, 4, 0, 1, 2, 3, 4, 0, 1, 2, 3, 4),
d = c(10, 10, 20, 30, 30, 10, 10, 20, 30, 30, 100, 100, 11, 12, 12, 403, 403, 43, 21, 21, 403, 403, 43, 21, 21))
ggplot(aes(x = x, y = y), data = tes1) + geom_point(aes(color = factor(d)), size = 4)
Does this do what you're after?
Explanation: we first convert tes into a flattened table with ftable(xtabs(...)). Then we simply replicate the first and last column, and the first and last row. We then assign new column and row names to reflect the extra "flanking" rows and columns, and finally convert back to a long data frame with data.frame(as.table(...)).
# Convert to table then matrix
m <- ftable(xtabs(d ~ x + y, data = tes));
class(m) <- "matrix";
# Replicate first and last column/row by binding to the beginning
# and end, respectively of the matrix
m <- cbind(m[, 1], m, m[, ncol(m)]);
m <- rbind(m[1, ], m, m[nrow(m), ]);
# Set column/row names
rownames(m) <- seq(min(tes$x) - 1, max(tes$x) + 1);
colnames(m) <- seq(min(tes$y) - 1, max(tes$y) + 1);
# Convert back to long dataframe
tes.ext <- data.frame(as.table(m));
colnames(tes.ext) <- colnames(tes);
# Plot
ggplot(aes(x = x, y = y), data = tes.ext) + geom_point(aes(color = factor(d)), size = 5)
Data
tes <- data.frame(x = c(1, 1, 1, 2, 2, 2, 3, 3, 3),
y = c(1, 2, 3, 1, 2, 3, 1, 2, 3),
d = c(10, 20, 30, 100, 11, 12, 403, 43, 21))
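An alternative sketch (my own variation on the answer above, not from the original): once the data are in matrix form, the flanking rows and columns can also be added by indexing, which avoids the explicit cbind/rbind calls:
# Convert to table then matrix, as before
m <- ftable(xtabs(d ~ x + y, data = tes))
class(m) <- "matrix"
# Repeat the first and last row and column by indexing
m2 <- m[c(1, 1:nrow(m), nrow(m)), c(1, 1:ncol(m), ncol(m))]
rownames(m2) <- seq(min(tes$x) - 1, max(tes$x) + 1)
colnames(m2) <- seq(min(tes$y) - 1, max(tes$y) + 1)
# Back to a long data frame and plot as before (ggplot2 loaded earlier)
tes.ext2 <- data.frame(as.table(m2))
colnames(tes.ext2) <- colnames(tes)
ggplot(aes(x = x, y = y), data = tes.ext2) + geom_point(aes(color = factor(d)), size = 5)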

Manually coded Poisson log likelihood function returns a different result from glm for models with an interaction

I've coded my own Poisson likelihood function, but it returns values that are significantly different from glm for a model with an interaction, on one specific data set. Note that the function produces exactly the same results as glm on all other data I've tried, as well as for the model without the interaction on this data set.
> # Log likelihood function
> llpoi = function(X, y){
+ # Ensures X is a matrix
+ if(class(X) != "matrix") X = as.matrix(X)
+ # Ensures there's a constant
+ if(sum(X[, 1]) != nrow(X)) X = cbind(1, X)
+ # A useful scalar that I'll need below
+ k = ncol(X)
+ ## Function to be maximized
+ FUN = function(par, X, y){
+ # beta hat -- the parameter we're trying to estimate
+ betahat = par[1:k]
+ # mu hat -- the systematic component
+ muhat = X %*% betahat
+ # Log likelihood function
+ sum(muhat * y - exp(muhat))
+ }
+ # Optimizing
+ opt = optim(rep(0, k), fn = FUN, y = y, X = X, control = list(fnscale = -1), method = "BFGS", hessian = T)
+ # Results, including getting the SEs from the hessian
+ cbind(opt$par, sqrt(diag(solve(-1 * opt$hessian))))
+ }
>
> # Defining inputs
> y = c(2, 2, 1, 1, 1, 1, 1, 2, 2, 1, 2, 2, 2, 1, 1, 3, 1, 1, 3, 2, 2, 2, 3, 1, 2, 4, 3, 3, 3, 1, 3, 0, 2, 1, 2, 4, 1, 2, 0, 2, 1, 2, 1, 4, 1, 2, 0)
> x1 = c(8, 1, 0, 3, 3, 3, 5, 4, 0.4, 1.5, 2, 1, 1, 7, 2, 3, 0, 2, 1.5, 5, 1, 4, 5.5, 6, 3, 3, 2, 0.5, 5, 10, 3, 22, 20, 3, 20, 10, 15, 25, 15, 6, 3.5, 5, 18, 2, 15.0, 16, 24)
> x2 = c(12, 12, 12, 16, 12, 12, 12, 12, 12, 12, 12, 12, 9, 9, 12, 9, 12, 12, 9, 16, 9, 6, 12, 9, 9, 12, 12, 12, 12, 14, 14, 14, 9, 12, 9, 12, 3, 12, 9, 6, 12, 12, 12, 12, 12, 12, 9)
>
> # Results
> withmyfun = llpoi(cbind(x1, x2, x1 * x2), y)
> round(withmyfun, 2)
[,1] [,2]
[1,] 0.96 0.90
[2,] -0.05 0.09
[3,] -0.02 0.08
[4,] 0.00 0.01
> withglm = glm(y ~ x1 + x2 + x1 * x2, family = "poisson")
> round(summary(withglm)$coef[, 1:2], 2)
Estimate Std. Error
(Intercept) 1.08 0.90
x1 -0.07 0.09
x2 -0.03 0.08
x1:x2 0.00 0.01
Is this something data specific? Is it inherent to the optimization process, which will eventually diverge more significantly from glm and I just got unlucky with this data? Is it a function of using method = "BFGS" for optim?
By rescaling the right-hand-side variables, the results improve a lot (tmp below is a data frame holding the y, x1 and x2 defined above).
> library(data.table)
> setDT(tmp)
> tmp[, x1 := scale(x1)][, x2 := scale(x2)]
>
>
> withmyfun = with(tmp, llpoi(cbind(x1, x2, x1 * x2), y))
> withmyfun
[,1] [,2]
[1,] 0.57076392 0.1124637
[2,] -0.19620040 0.1278070
[3,] -0.01509032 0.1169019
[4,] 0.05636459 0.1380611
>
> withglm = glm(y ~ x1 + x2 + x1 * x2, family = "poisson", data = tmp)
> summary(withglm)$coef[, 1:2]
Estimate Std. Error
(Intercept) 0.57075132 0.1124641
x1 -0.19618199 0.1278061
x2 -0.01507467 0.1169034
x1:x2 0.05636934 0.1380621
>
So my recommendation is to normalize the variables inside llpoi before calling optim, and to rescale the estimates back before the function returns. Your example data have a very wide range, which results in very small coefficient estimates. The problem is made worse by the relatively flat likelihood surface caused by the insignificant variables.
Note:
With this standardized version you get output very close to glm for everything except the intercept. What I mean by standardizing is something like this:
llpoi = function(X, y){
  # Ensure X is a matrix
  if(!is.matrix(X)) X = as.matrix(X)
  # Ensure there's a constant
  if(sum(X[, 1]) != nrow(X)) X = cbind(1, X)
  # Means and SDs used to standardize the predictors
  # (the intercept column is left untouched)
  avgs <- c(0, apply(X[, 2:ncol(X)], 2, mean))
  sds <- c(1, apply(X[, 2:ncol(X)], 2, sd))
  X <- t((t(X) - avgs)/sds)
  # A useful scalar that I'll need below
  k = ncol(X)
  ## Function to be maximized
  FUN = function(par, X, y){
    # beta hat -- the parameter we're trying to estimate
    betahat = par[1:k]
    # mu hat -- the systematic component
    muhat = X %*% betahat
    # Log likelihood function
    sum(muhat * y - exp(muhat))
  }
  # Optimizing
  opt = optim(rep(0, k), fn = FUN, y = y, X = X, control = list(fnscale = -1), method = "BFGS", hessian = TRUE)
  # Results, including getting the SEs from the hessian,
  # rescaled back to the original scale of the predictors
  cbind(opt$par, sqrt(diag(solve(-1 * opt$hessian))))/sds
}
After much research, I learned that the two results differ because glm.fit, the workhorse behind glm, optimizes the function via Newton-Raphson, while I used BFGS in my llpoi function. BFGS is faster but less precise. The two results are very similar in most cases, but may differ more significantly when the likelihood surface is too flat or has too many maxima, as amatsuo_net correctly pointed out, because the hill-climbing algorithm used by BFGS can get stuck.
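If you want the BFGS run itself to be more precise without rescaling, one option (a sketch based on the standard Poisson score equations, not taken from the answers above) is to give optim the analytic gradient of the log likelihood, t(X) %*% (y - exp(X %*% beta)), and a tighter convergence tolerance. The snippet below is meant to sit inside llpoi, replacing the existing optim call:
# Analytic gradient of sum(muhat * y - exp(muhat)) with respect to beta
GRAD = function(par, X, y){
  betahat = par[1:k]
  as.vector(t(X) %*% (y - exp(X %*% betahat)))
}
# Same optim call as before, but with gr = GRAD and a tighter tolerance
opt = optim(rep(0, k), fn = FUN, gr = GRAD, y = y, X = X,
            control = list(fnscale = -1, reltol = 1e-12),
            method = "BFGS", hessian = TRUE)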

How to calculate Euclidean distance between two points stored in rows of two separate matrices?

I have two matrices, each storing one point per row.
I would like to compute the distance between point X (a row of the first matrix) and point Y (the corresponding row of the second) without using a loop, and in such a way that the expression/function still works when the matrices are expanded by additional columns.
For validation one could use:
sqrt((m1[,1] - m2[,1])^2 + (m1[,2] - m2[,2])^2 + (m1[,3] - m2[,3])^2 + (m1[,4] - m2[,4])^2 + (m1[,5] - m2[,5])^2)
The expression above gives the correct result for the distance between X and Y; however, once the matrices are expanded by additional columns, the expression also has to be expanded, which is not an acceptable solution...
Would you be so kind as to tell me how to achieve this? Any help would be more than welcome. I've been stuck on this one for a while...
Subtraction between matrices (m1 - m2) is element-wise in R, and rowSums calculates the sum along each row:
m1 <- matrix(
c(4, 3, 1, 6,
2, 4, 5, 7,
9, 0, 1, 2,
6, 7, 8, 9,
1, 6, 4, 3),
nrow = 4
)
m2 <- matrix(
c(2, 6, 3, 2,
9, 4, 1, 4,
1, 3, 0, 1,
4, 5, 0, 2,
7, 2, 1, 3),
nrow = 4
)
sqrt((m1[,1] - m2[,1])^2 + (m1[,2] - m2[,2])^2 + (m1[,3] - m2[,3])^2 + (m1[,4] - m2[,4])^2 + (m1[,5] - m2[,5])^2)
# [1] 12.529964 6.164414 9.695360 8.660254
sqrt(rowSums((m1 - m2) ^ 2))
# [1] 12.529964 6.164414 9.695360 8.660254
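To confirm that the rowSums version keeps working when columns are added, here is a quick check (my own illustration, with an arbitrary extra column): append a sixth column to both matrices and reuse the same expression unchanged:
m1b <- cbind(m1, c(1, 2, 3, 4))   # hypothetical extra coordinate
m2b <- cbind(m2, c(4, 3, 2, 1))
sqrt(rowSums((m1b - m2b) ^ 2))    # still one distance per row, no loop needed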

How does one calculate LD50 from a glmer?

I am analyzing a data set where ~10 individuals are exposed to a set treatment (Time) and mortality is recorded (Alive, Dead). glmer was used to model the data because Treatments were blocked (Trial).
From the following model I want to predict the Time at which 50% of individuals die.
Trial <- c(1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3)
Time <- c(2, 6, 9, 12, 15, 18, 21, 24, 1, 2, 3, 4, 5, 6, 1.5, 3, 4.5, 6, 39)
Alive <- c(10, 0, 0, 0, 0, 0, 0, 0, 6, 2, 8, 1, 0, 0, 4, 6, 1, 2, 0)
Dead <- c(0, 10, 6, 10, 10, 10, 7, 10, 0, 8, 1, 9, 10, 10, 5, 0, 8, 6, 10)
library(lme4)
ostrinaA.glmm <- glmer(cbind(Alive, Dead) ~ Time + (1|Trial), family = binomial(link = "logit"))
summary(ostrinaA.glmm)
If I were simply modelling using glm, the dose.p function from MASS could be used. On a different forum I found generalized code for a dose.p.glmm from Bill Pikounis. It is as follows:
dose.p.glmm <- function(obj, cf = 1:2, p = 0.5) {
  eta <- obj$family$linkfun(p)
  b <- fixef(obj)[cf]
  x.p <- (eta - b[1L])/b[2L]
  names(x.p) <- paste("p = ", format(p), ":", sep = "")
  pd <- -cbind(1, x.p)/b[2L]
  SE <- sqrt(((pd %*% vcov(obj)[cf, cf]) * pd) %*% c(1, 1))
  res <- structure(x.p, SE = SE, p = p)
  class(res) <- "glm.dose"
  res
}
I'm new to coding and need help adjusting this code for my model. My attempt is as follows:
dose.p.glmm <- function(ostrinaA.glmm, cf = 1:2, p = 0.5) {
  eta <- ostrinaA.glmm$family$linkfun(p)
  b <- fixef(ostrinaA.glmm)[cf]
  x.p <- (eta - b[1L])/b[2L]
  names(x.p) <- paste("p = ", format(p), ":", sep = "")
  pd <- -cbind(1, x.p)/b[2L]
  SE <- sqrt(((pd %*% vcov(obj)[cf, cf]) * pd) %*% c(1, 1))
  res <- structure(x.p, SE = SE, p = p)
  class(res) <- "glm.dose"
  res
}
dose.p.glmm(ostrinaA.glmm, cf=1:2, p=0.5)
Error in ostrinaA.glmm$family : $ operator not defined for this S4 class
Any assistance adjusting this code for my model would be greatly appreciated.
At a quick glance I would think replacing
eta <- obj$family$linkfun(p)
with
f <- family(obj)
eta <- f$linkfun(p)
should do the trick.
You also need to replace the res <- ... line with
res <- structure(x.p, SE = matrix(SE), p = p)
This is rather obscure, but it is necessary because the print.glm.dose method (from the MASS package) automatically tries to cbind() some pieces together. This fails if SE is a fancy matrix from the Matrix package rather than a vanilla matrix from base R: matrix() does the conversion.
If you are very new to coding, you might not realize that you didn't have to change the obj variable name to ostrinaA.glmm in the code you copied. In other words, Pikounis's code should work perfectly well with only the two modifications I suggested above.
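Putting both modifications together, the adjusted function might look like the sketch below. It keeps Pikounis's structure and only applies the two replacements described above; the call at the end assumes the ostrinaA.glmm fit from the question:
library(lme4)
dose.p.glmm <- function(obj, cf = 1:2, p = 0.5) {
  f <- family(obj)                    # family() works for merMod objects, unlike obj$family
  eta <- f$linkfun(p)
  b <- fixef(obj)[cf]
  x.p <- (eta - b[1L])/b[2L]
  names(x.p) <- paste("p = ", format(p), ":", sep = "")
  pd <- -cbind(1, x.p)/b[2L]
  SE <- sqrt(((pd %*% vcov(obj)[cf, cf]) * pd) %*% c(1, 1))
  res <- structure(x.p, SE = matrix(SE), p = p)   # matrix() converts the Matrix-class SE to a base matrix
  class(res) <- "glm.dose"
  res
}
dose.p.glmm(ostrinaA.glmm, cf = 1:2, p = 0.5)     # estimated time at which 50% of individuals die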

R: displaying scientific notation

chocolate <- data.frame(
Sabor =
c(5, 7, 3,
4, 2, 6,
5, 3, 6,
5, 6, 0,
7, 4, 0,
7, 7, 0,
6, 6, 0,
4, 6, 1,
6, 4, 0,
7, 7, 0,
2, 4, 0,
5, 7, 4,
7, 5, 0,
4, 5, 0,
6, 6, 3
),
Tipo = factor(rep(c("A", "B", "C"), 15)),
Provador = factor(rep(1:15, rep(3, 15))))
tapply(chocolate$Sabor, chocolate$Tipo, mean)
ajuste <- lm(chocolate$Sabor ~ chocolate$Tipo + chocolate$Provador)
summary(ajuste)
anova(ajuste)
a1 <- aov(chocolate$Sabor ~ chocolate$Tipo + chocolate$Provador)
posthoc <- TukeyHSD(x=a1, 'chocolate$Tipo', conf.level=0.95)
posthoc
Tukey multiple comparisons of means
95% family-wise confidence level
Fit: aov(formula = chocolate$Sabor ~ chocolate$Tipo + chocolate$Provador)
$`chocolate$Tipo`
diff lwr upr p adj
B-A -0.06666667 -1.803101 1.669768 0.9950379
C-A -3.80000000 -5.536435 -2.063565 0.0000260
C-B -3.73333333 -5.469768 -1.996899 0.0000337
Here is some sample code using TukeyHSD. The output is a matrix, and I want the values to be displayed in scientific notation. I've tried using scipen and setting options(digits = 20), but some of the values from my actual data are still so small that the p adj values are displayed as 0.00000000000000000000.
How can I get the values to be displayed in scientific notation?
You could do this:
format(posthoc, scientific = TRUE)
If you want to change the number of digits, for instance using 3, you could do this:
format(posthoc, scientific = TRUE, digits = 3)
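If you only want the comparison table itself, you can pull that component out of the TukeyHSD result (it is a list with one matrix per model term, named as in the output above) and format it the same way:
format(posthoc$`chocolate$Tipo`, scientific = TRUE, digits = 3)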
