I am trying to predict which variables impact lift, which is the sales rate for food goods on promotion. In my dataset, lift is my dependent variable and I have eight possible independent variables. Here are the first couple of rows of my dataset.
I need to do this analysis for 20 different products across 30 different stores. I want to know if it is possible to run the 20 regressions for all of the products simultaneously in R. That way I would only have to run 30 regressions manually, one for each store, and I would get results for each product in each store. I would like to use stepwise regression because that is what I am familiar with.
Here is the code I have written so far using only one regression at a time:
data0 <- subset(data0, Store == "Store 1")
data0 <- subset(data0, Product == "Product 1")
########Summary Stats
head(data0)
summary(data0)
str(data0)
###Data Frame
library(plm)   # pdata.frame() comes from the plm package
data0 <- pdata.frame(data0, index = c("Product", "Time"))
data0 <- data.frame(data0)
###Stepwise
step_qtr_1v <- lm(Lift ~ Depth + Length + Copromotion +
                    Category.Sales.On.Merch + Quality.Support.Binary,
                  data = data0)
summary(step_qtr_1v)
I am new to R so would appreciate simplicity. Thank you.
It's really important to follow the guidelines when asking a question. Nonetheless, I've made a toy example with the iris dataset.
In order to run the same regression multiple times over different parts of your dataset, you can use the lapply() function, which applies a function over a vector or list (here, the species names). The only thing you have to do is pass each value to the subset argument of lm():
data("iris")
species <- unique(iris$Species)
species
Running species shows the levels of this variable:
[1] setosa versicolor virginica
Levels: setosa versicolor virginica
And running colnames(iris) tells us what variables to use:
[1] "Sepal.Length" "Sepal.Width" "Petal.Length" "Petal.Width" "Species"
The lapply function can be run thereafter like so:
models <- lapply(species, function(x) {
  lm(Petal.Length ~ Petal.Width + Sepal.Length + Sepal.Width,
     data = iris, subset = iris$Species == x)
})
lapply(models, summary)
The result:
[[1]]
Call:
lm(formula = Petal.Length ~ Petal.Width + Sepal.Length + Sepal.Width,
data = iris, subset = iris$Species == x)
Residuals:
Min 1Q Median 3Q Max
-0.38868 -0.07905 0.00632 0.10095 0.48238
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.86547 0.34331 2.521 0.0152 *
Petal.Width 0.46253 0.23410 1.976 0.0542 .
Sepal.Length 0.11606 0.10162 1.142 0.2594
Sepal.Width -0.02865 0.09334 -0.307 0.7602
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.1657 on 46 degrees of freedom
Multiple R-squared: 0.1449, Adjusted R-squared: 0.08914
F-statistic: 2.598 on 3 and 46 DF, p-value: 0.06356
[[2]]
Call:
lm(formula = Petal.Length ~ Petal.Width + Sepal.Length + Sepal.Width,
data = iris, subset = iris$Species == x)
Residuals:
Min 1Q Median 3Q Max
-0.61706 -0.13086 -0.02966 0.09854 0.54311
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.16506 0.40032 0.412 0.682
Petal.Width 1.36021 0.23569 5.771 6.37e-07 ***
Sepal.Length 0.43586 0.07938 5.491 1.67e-06 ***
Sepal.Width -0.10685 0.14625 -0.731 0.469
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.2319 on 46 degrees of freedom
Multiple R-squared: 0.7713, Adjusted R-squared: 0.7564
F-statistic: 51.72 on 3 and 46 DF, p-value: 8.885e-15
[[3]]
Call:
lm(formula = Petal.Length ~ Petal.Width + Sepal.Length + Sepal.Width,
data = iris, subset = iris$Species == x)
Residuals:
Min 1Q Median 3Q Max
-0.7325 -0.1493 0.0516 0.1555 0.5866
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.46503 0.47686 0.975 0.335
Petal.Width 0.21565 0.17410 1.239 0.222
Sepal.Length 0.74297 0.07129 10.422 1.07e-13 ***
Sepal.Width -0.08225 0.15999 -0.514 0.610
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.2819 on 46 degrees of freedom
Multiple R-squared: 0.7551, Adjusted R-squared: 0.7391
F-statistic: 47.28 on 3 and 46 DF, p-value: 4.257e-14
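A small optional addition: naming the list elements after the species makes each summary easier to identify.
names(models) <- species
summary(models$versicolor)   # look up one model by name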
BTW, you are not performing any stepwise regression in your code. But the above example can be easily modified to do so.
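For instance, here is a minimal sketch of backward stepwise selection run per product, assuming data0 still holds one store's data with all products (predictor names taken from your code above):
stepwise_models <- lapply(unique(data0$Product), function(p) {
  d <- subset(data0, Product == p)   # rows for one product
  full_fit <- lm(Lift ~ Depth + Length + Copromotion +
                   Category.Sales.On.Merch + Quality.Support.Binary,
                 data = d)
  step(full_fit, direction = "backward", trace = FALSE)   # backward stepwise selection
})
lapply(stepwise_models, summary)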
Hope this helps.
Related
Let's say I'm working with the iris dataset in R:
data(iris)
summary(iris)
Sepal.Length Sepal.Width Petal.Length Petal.Width
Min. : 4.300 Min. : 2.000 Min. : 1.000 Min. : 0.100
1st Qu.: 5.100 1st Qu.: 2.800 1st Qu.: 1.600 1st Qu.: 0.300
Median : 5.800 Median : 3.000 Median : 4.350 Median : 1.300
Mean : 5.843 Mean : 3.057 Mean : 3.758 Mean : 1.199
3rd Qu.: 6.400 3rd Qu.: 3.300 3rd Qu.: 5.100 3rd Qu.: 1.800
Max. : 7.900 Max. : 4.400 Max. : 6.900 Max. : 2.500
Species
setosa : 50
versicolor: 50
virginica : 50
I want to perform a linear regression where Petal.Length is the dependent variable, and Sepal.Length is the independent variable. How can I, in R, perform this regression for each Species category at once, getting values of P, R² and F for each test?
Use by.
by(iris, iris$Species, \(x) summary(lm(Petal.Length ~ Sepal.Length, x)))
# iris$Species: setosa
#
# Call:
# lm(formula = Petal.Length ~ Sepal.Length, data = x)
#
# Residuals:
# Min 1Q Median 3Q Max
# -0.40856 -0.08027 -0.00856 0.11708 0.46512
#
# Coefficients:
# Estimate Std. Error t value Pr(>|t|)
# (Intercept) 0.80305 0.34388 2.335 0.0238 *
# Sepal.Length 0.13163 0.06853 1.921 0.0607 .
# ---
# Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#
# Residual standard error: 0.1691 on 48 degrees of freedom
# Multiple R-squared: 0.07138, Adjusted R-squared: 0.05204
# F-statistic: 3.69 on 1 and 48 DF, p-value: 0.0607
#
# ---------------------------------------------------------
# iris$Species: versicolor
#
# Call:
# lm(formula = Petal.Length ~ Sepal.Length, data = x)
#
# Residuals:
# Min 1Q Median 3Q Max
# -0.68611 -0.22827 -0.04123 0.19458 0.79607
#
# Coefficients:
# Estimate Std. Error t value Pr(>|t|)
# (Intercept) 0.18512 0.51421 0.360 0.72
# Sepal.Length 0.68647 0.08631 7.954 2.59e-10 ***
# ---
# Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#
# Residual standard error: 0.3118 on 48 degrees of freedom
# Multiple R-squared: 0.5686, Adjusted R-squared: 0.5596
# F-statistic: 63.26 on 1 and 48 DF, p-value: 2.586e-10
#
# ---------------------------------------------------------
# iris$Species: virginica
#
# Call:
# lm(formula = Petal.Length ~ Sepal.Length, data = x)
#
# Residuals:
# Min 1Q Median 3Q Max
# -0.68603 -0.21104 0.06399 0.18901 0.66402
#
# Coefficients:
# Estimate Std. Error t value Pr(>|t|)
# (Intercept) 0.61047 0.41711 1.464 0.15
# Sepal.Length 0.75008 0.06303 11.901 6.3e-16 ***
# ---
# Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#
# Residual standard error: 0.2805 on 48 degrees of freedom
# Multiple R-squared: 0.7469, Adjusted R-squared: 0.7416
# F-statistic: 141.6 on 1 and 48 DF, p-value: 6.298e-16
Edit
To elaborate on my comment, we can extract the desired values very easily by doing
by(iris, iris$Species, \(x) lm(Petal.Length ~ Sepal.Length, x)) |>
  lapply(\(x) {
    with(summary(x), c(r2 = r.squared, f = fstatistic,
                       p = do.call(pf, c(as.list(unname(fstatistic)), lower.tail = FALSE))))
  }) |>
  do.call(what = rbind)
# r2 f.value f.numdf f.dendf p
# setosa 0.07138289 3.689765 1 48 6.069778e-02
# versicolor 0.56858983 63.263024 1 48 2.586190e-10
# virginica 0.74688439 141.636664 1 48 6.297786e-16
If you would like to pull out those values, we can use
library(dplyr)

df <- iris
list_res <- df %>%
  base::split(., df$Species, drop = FALSE) %>%
  lapply(., function(x) {
    fit <- lm(Petal.Length ~ Sepal.Length, data = x) %>%
      summary()
    r <- fit$r.squared
    coeffs <- fit$coefficients %>%
      as_tibble()
    f <- fit$fstatistic[[1]]
    list_res <- list(r, coeffs, f)
    names(list_res) <- c("R-Squared", "Coefficients", "F-Value")
    return(list_res)
  })
That returns a list of three objects for each regression model, including the desired values. I've left the coefficients table as it is, since it's always good to know to which independent variable your p-values belong. If you want those p-values pulled out separately, we can use coeffs <- fit$coefficients[, 4] %>% as.list(), for instance.
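If you prefer one flat table instead of a nested list, here is a rough sketch along the same lines (the column names are chosen here purely for illustration):
p_table <- df %>%
  split(df$Species) %>%
  lapply(function(x) {
    fit <- summary(lm(Petal.Length ~ Sepal.Length, data = x))
    data.frame(species   = as.character(unique(x$Species)),
               r.squared = fit$r.squared,
               f.value   = fit$fstatistic[[1]],
               p.slope   = fit$coefficients["Sepal.Length", 4])
  }) %>%
  bind_rows()
p_table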
Say I have a table, I remove all the inapplicable values, and I run a regression. If I ran the exact same regression on the same table, but this time, instead of removing the inapplicable values, I turned them into NA values, would the regression still give me the same coefficients?
The regression would omit any NA values prior to doing the analysis; lm()'s default na.action is na.omit, which deletes any row that contains an NA in any of the predictor variables or the outcome variable. You can check this by comparing the degrees of freedom and other statistics of both models.
Here's a toy example:
head(mtcars)
# check the data set size (all non-missings)
dim(mtcars) # has 32 rows
# Introduce some missings
set.seed(5)
mtcars[sample(1:nrow(mtcars), 5), sample(1:ncol(mtcars), 5)] <- NA
head(mtcars)
# Create an alternative where all missings are omitted
mtcars_NA_omit <- na.omit(mtcars)
# Check the data set size again
dim(mtcars_NA_omit) # Now only has 27 rows
# Now compare some simple linear regressions
summary(lm(mpg ~ cyl + hp + am + gear, data = mtcars))
summary(lm(mpg ~ cyl + hp + am + gear, data = mtcars_NA_omit))
Comparing the two summaries, you can see that they are identical, with one exception: the first model's summary notes that 5 observations have been deleted due to missingness, which is exactly what we did manually for mtcars_NA_omit.
# First, original model
Call:
lm(formula = mpg ~ cyl + hp + am + gear, data = mtcars)
Residuals:
Min 1Q Median 3Q Max
-5.0835 -1.7594 -0.2023 1.4313 5.6948
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 29.64284 7.02359 4.220 0.000352 ***
cyl -1.04494 0.83565 -1.250 0.224275
hp -0.03913 0.01918 -2.040 0.053525 .
am 4.02895 1.90342 2.117 0.045832 *
gear 0.31413 1.48881 0.211 0.834833
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 2.947 on 22 degrees of freedom
(5 observations deleted due to missingness)
Multiple R-squared: 0.7998, Adjusted R-squared: 0.7635
F-statistic: 21.98 on 4 and 22 DF, p-value: 2.023e-07
# Second model where we dropped missings manually
Call:
lm(formula = mpg ~ cyl + hp + am + gear, data = mtcars_NA_omit)
Residuals:
Min 1Q Median 3Q Max
-5.0835 -1.7594 -0.2023 1.4313 5.6948
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 29.64284 7.02359 4.220 0.000352 ***
cyl -1.04494 0.83565 -1.250 0.224275
hp -0.03913 0.01918 -2.040 0.053525 .
am 4.02895 1.90342 2.117 0.045832 *
gear 0.31413 1.48881 0.211 0.834833
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 2.947 on 22 degrees of freedom
Multiple R-squared: 0.7998, Adjusted R-squared: 0.7635
F-statistic: 21.98 on 4 and 22 DF, p-value: 2.023e-07
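If you want a programmatic check rather than eyeballing the two summaries, here is a small sketch reusing the example above:
fit_na   <- lm(mpg ~ cyl + hp + am + gear, data = mtcars)
fit_omit <- lm(mpg ~ cyl + hp + am + gear, data = mtcars_NA_omit)
nobs(fit_na)    # 27 observations actually used (5 dropped by na.omit)
nobs(fit_omit)  # 27 observations as well
all.equal(coef(fit_na), coef(fit_omit))   # TRUE: identical coefficients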
I have a problem that I have been trying to solve for a couple of hours now, but I simply can't figure it out (I'm new to R, btw).
Basically, what I'm trying to do (using mtcars to illustrate) is to make R test different independent variables (while adjusting for "cyl" and "disp") against the same dependent variable ("mpg"). The best solution I have been able to come up with is:
lm <- lapply(mtcars[,4:6], function(x) lm(mpg ~ cyl + disp + x, data = mtcars))
summary <- lapply(lm, summary)
... where 4:6 corresponds to columns "hp", "drat" and "wt".
This actually works OK, but the problem is that the summary appears with an "x" instead of, for instance, "hp":
$hp
Call:
lm(formula = mpg ~ cyl + disp + x, data = mtcars)
Residuals:
Min 1Q Median 3Q Max
-4.0889 -2.0845 -0.7745 1.3972 6.9183
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 34.18492 2.59078 13.195 1.54e-13 ***
cyl -1.22742 0.79728 -1.540 0.1349
disp -0.01884 0.01040 -1.811 0.0809 .
x -0.01468 0.01465 -1.002 0.3250
---
Signif. codes:
0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 3.055 on 28 degrees of freedom
Multiple R-squared: 0.7679, Adjusted R-squared: 0.743
F-statistic: 30.88 on 3 and 28 DF, p-value: 5.054e-09
Questions:
Is there a way to fix this? And have I done this in the smartest way using lapply, or would it be better to use, for instance, for loops or other options?
Ideally, I would also very much like to make a table showing, for instance, only the estimate and p-value for each independent variable tested. Can this somehow be done?
Best regards
One approach to get the name of the variable displayed in the summary is to loop over the names of the variables and set up the formula using paste and as.formula:
lm <- lapply(names(mtcars)[4:6], function(x) {
  formula <- as.formula(paste0("mpg ~ cyl + disp + ", x))
  lm(formula, data = mtcars)
})
summary <- lapply(lm, summary)
summary
#> [[1]]
#>
#> Call:
#> lm(formula = formula, data = mtcars)
#>
#> Residuals:
#> Min 1Q Median 3Q Max
#> -4.0889 -2.0845 -0.7745 1.3972 6.9183
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 34.18492 2.59078 13.195 1.54e-13 ***
#> cyl -1.22742 0.79728 -1.540 0.1349
#> disp -0.01884 0.01040 -1.811 0.0809 .
#> hp -0.01468 0.01465 -1.002 0.3250
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 3.055 on 28 degrees of freedom
#> Multiple R-squared: 0.7679, Adjusted R-squared: 0.743
#> F-statistic: 30.88 on 3 and 28 DF, p-value: 5.054e-09
Concerning the second part of your question: one way to achieve this is by making use of broom::tidy from the broom package, which gives you a summary of the regression results as a tidy data frame:
lapply(lm, broom::tidy)
#> [[1]]
#> # A tibble: 4 x 5
#> term estimate std.error statistic p.value
#> <chr> <dbl> <dbl> <dbl> <dbl>
#> 1 (Intercept) 34.2 2.59 13.2 1.54e-13
#> 2 cyl -1.23 0.797 -1.54 1.35e- 1
#> 3 disp -0.0188 0.0104 -1.81 8.09e- 2
#> 4 hp -0.0147 0.0147 -1.00 3.25e- 1
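If you would rather have all three models in one table, here is a small follow-up sketch (labelling each model by the extra predictor it contains):
names(lm) <- names(mtcars)[4:6]   # "hp", "drat", "wt"
dplyr::bind_rows(lapply(lm, broom::tidy), .id = "model")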
We could use reformulate to create the formula for the lm:
lst1 <- lapply(names(mtcars)[4:6], function(x) {
  fmla <- reformulate(c("cyl", "disp", x), response = "mpg")
  model <- lm(fmla, data = mtcars)
  model$call <- deparse(fmla)   # store a readable formula in the call, so it prints nicely
  model
})
Then, get the summary
summary1 <- lapply(lst1, summary)
summary1[[1]]
#Call:
#"mpg ~ cyl + disp + hp"
#Residuals:
# Min 1Q Median 3Q Max
#-4.0889 -2.0845 -0.7745 1.3972 6.9183
#Coefficients:
# Estimate Std. Error t value Pr(>|t|)
#(Intercept) 34.18492 2.59078 13.195 1.54e-13 ***
#cyl -1.22742 0.79728 -1.540 0.1349
#disp -0.01884 0.01040 -1.811 0.0809 .
#hp -0.01468 0.01465 -1.002 0.3250
#---
#Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#Residual standard error: 3.055 on 28 degrees of freedom
#Multiple R-squared: 0.7679, Adjusted R-squared: 0.743
#F-statistic: 30.88 on 3 and 28 DF, p-value: 5.054e-09
I don't know how to interpret the Sum of Squares for a numeric independent variable.
summary(aov(Petal.Width ~ Petal.Length + Species, iris))
## Df Sum Sq Mean Sq F value Pr(>F)
## Petal.Length 1 80.26 80.26 2487.02 < 2e-16 ***
## Species 2 1.60 0.80 24.77 5.48e-10 ***
## Residuals 146 4.71 0.03
## ---
## Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
The sum of squares for Species is clear to me (the sum of squared deviations from the group means), but how do I interpret it if the independent variable is numeric, like Petal.Length?
The components of this linear model are not orthogonal, so we cannot calculate the sum of squares (SS) of each component independently of the others. Rather, we must take a sequence of model comparisons. In this case aov considers these models, owing to the order in which the components were listed in the formula:
fm0 <- lm(Petal.Width ~ 1, iris) # null model
fm1 <- lm(Petal.Width ~ Petal.Length, iris)
fm2 <- lm(Petal.Width ~ Petal.Length + Species, iris) # full model
Note that the residual sum of squares (RSS) of a model fm is sum(resid(fm)^2), and R has a function specifically for this, deviance(fm). Keeping this in mind, we can decompose the RSS of the null model like this:
deviance(fm0)                        # RSS of null model
  = (deviance(fm0) - deviance(fm1))  # SS of Petal.Length
  + (deviance(fm1) - deviance(fm2))  # SS of Species
  + deviance(fm2)                    # RSS of full model
and each sum of squares reported in the table in the question is one of the
lines above. That is,
deviance(fm0) - deviance(fm1) # SS of Petal.Length
## [1] 80.25984
deviance(fm1) - deviance(fm2) # SS of Species
## [1] 1.598453
deviance(fm2) # RSS of full model
## [1] 4.711643
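As a cross-check, the same sequential sums of squares can be read off a multi-model anova() comparison of the fits above:
anova(fm0, fm1, fm2)
## the "Sum of Sq" column reproduces 80.26 (adding Petal.Length) and 1.60 (adding Species),
## with a residual sum of squares of 4.71 for the full model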
Note
Note that the SS values we get depend on the sequence of models we use. For example, if we use this sequence, which considers Species before Petal.Length (whereas above we considered Petal.Length and then Species), we get different SS values.
fm0 # same null model as above
fm1a <- lm(Petal.Width ~ Species, iris)
fm2 # same full model as above
deviance(fm0) - deviance(fm1a) # SS of Species
## [1] 80.41333
deviance(fm1a) - deviance(fm2) # SS of Petal.Length
## [1] 1.444957
deviance(fm2) # RSS of full model
## [1] 4.711643
Note that this does correspond to aov if we list the components in that order; i.e. this time we list Species before Petal.Length to change the sequence of models that aov considers:
summary(aov(Petal.Width ~ Species + Petal.Length, iris))
## Df Sum Sq Mean Sq F value Pr(>F)
## Species 2 80.41 40.21 1245.89 < 2e-16 ***
## Petal.Length 1 1.44 1.44 44.77 4.41e-10 ***
## Residuals 146 4.71 0.03
## ---
## Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
I'm performing a multiple regression to find the best model to predict prices. See below for the output in the R console.
I'd like to store the first column (Estimate) in a row/matrix or data frame for future use, such as deploying the model on the web with R Shiny.
(Price = 698.8 + 0.116*Voltage - 70.72*VendorCHICONY - 36.6*VendorDELTA - 66.8*VendorLITEON - 14.86*H)
Can somebody kindly advise? Thanks in advance.
Call:
lm(formula = Price ~ Voltage + Vendor + H, data = PSU2)
Residuals:
Min 1Q Median 3Q Max
-10.9950 -0.6251 0.0000 3.0134 11.0360
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 698.821309 276.240098 2.530 0.0280 *
Voltage 0.116958 0.005126 22.818 1.29e-10 ***
VendorCHICONY -70.721088 9.308563 -7.597 1.06e-05 ***
VendorDELTA -36.639685 5.866688 -6.245 6.30e-05 ***
VendorLITEON -66.796531 6.120925 -10.913 3.07e-07 ***
H -14.869478 6.897259 -2.156 0.0541 .
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 7.307 on 11 degrees of freedom
Multiple R-squared: 0.9861, Adjusted R-squared: 0.9799
F-statistic: 156.6 on 5 and 11 DF, p-value: 7.766e-10
Use coef on your lm output.
e.g.
m <- lm(Sepal.Length ~ Sepal.Width + Species, iris)
summary(m)
# Call:
# lm(formula = Sepal.Length ~ Sepal.Width + Species, data = iris)
# Residuals:
# Min 1Q Median 3Q Max
# -1.30711 -0.25713 -0.05325 0.19542 1.41253
#
# Coefficients:
# Estimate Std. Error t value Pr(>|t|)
# (Intercept) 2.2514 0.3698 6.089 9.57e-09 ***
# Sepal.Width 0.8036 0.1063 7.557 4.19e-12 ***
# Speciesversicolor 1.4587 0.1121 13.012 < 2e-16 ***
# Speciesvirginica 1.9468 0.1000 19.465 < 2e-16 ***
# ---
# Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#
# Residual standard error: 0.438 on 146 degrees of freedom
# Multiple R-squared: 0.7259, Adjusted R-squared: 0.7203
# F-statistic: 128.9 on 3 and 146 DF, p-value: < 2.2e-16
coef(m)
# (Intercept) Sepal.Width Speciesversicolor Speciesvirginica
# 2.2513932 0.8035609 1.4587431 1.9468166
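If you specifically want the estimates in a data frame (e.g. to save and reuse in a Shiny app), one simple sketch:
coef_df <- data.frame(term = names(coef(m)), estimate = unname(coef(m)))
coef_df                           # one row per coefficient
# saveRDS(coef_df, "coefs.rds")   # or write.csv(coef_df, "coefs.csv") for later use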
See also names(m) which shows you some things you can extract, e.g. m$residuals (or equivalently, resid(m)).
And methods(class='lm') will show you some other functions that work on an lm object.
> methods(class='lm')
[1] add1 alias anova case.names coerce confint cooks.distance deviance dfbeta dfbetas drop1 dummy.coef effects extractAIC family
[16] formula hatvalues influence initialize kappa labels logLik model.frame model.matrix nobs plot predict print proj qr
[31] residuals rstandard rstudent show simulate slotsFromS3 summary variable.names vcov
(oddly, 'coef' is not in there? ah well)
Besides, I'd like to know if there is a command to show the "residual percentage", i.e. (actual value - fitted value) / actual value; currently the residuals() command only shows the info below, but I need the percentage instead.
residuals(fit3ab)
1 2 3 4 5 6
-5.625491e-01 -5.625491e-01 7.676578e-15 -8.293815e+00 -5.646900e+00 3.443652e+00
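There is no built-in extractor for that, but it is a one-liner to compute; here is a sketch, assuming fit3ab is an lm object whose observed response can be recovered from its model frame:
actual    <- model.response(model.frame(fit3ab))   # observed response values used in the fit
pct_resid <- residuals(fit3ab) / actual            # (actual - fitted) / actual
pct_resid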