R neuralnet package -- multiple outputs

The way I currently use neuralnet is to predict one output point from many input points. More specifically, I run the following:
nn <- neuralnet(
  as.formula(a ~ c + d),
  data = Z, hidden = c(3, 2), err.fct = "sse", act.fct = custom,
  linear.output = TRUE, rep = 5)
Here, if Z is a matrix with columns named a, c, and d, it will predict one value in column a from the corresponding values in columns c and d. (The rows serve as training samples.)
Suppose there is also a column b. I am wondering whether there is a way to predict both a and b from c and d. I've tried
as.formula(a+b ~ c+d)
but that does not appear to work.
Any ideas?

My bad, it works nicely using a + b ~ c + d. I thought the function did not accept this input (as it crashed many times), but there must have been another problem, which went away once I cleaned everything up.
nn <- neuralnet(as.formula(a + b ~ c + d),
                data = Z, hidden = c(3, 2), err.fct = "sse", act.fct = custom,
                linear.output = TRUE, rep = 5)
Works beautifully and returns a two-point (two-column) output! Neat.
An example from the neuralnet documentation confirms the format works :)
AND <- c(rep(0, 7), 1)
OR <- c(0, rep(1, 7))
binary.data <- data.frame(expand.grid(c(0, 1), c(0, 1), c(0, 1)), AND, OR)
print(net <- neuralnet(AND + OR ~ Var1 + Var2 + Var3, binary.data, hidden = 0,
                       rep = 10, err.fct = "ce", linear.output = FALSE))
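To pull the two-column output back out of the fitted network, here is a minimal sketch using neuralnet's compute() helper (my addition, not part of the original example); net.result has one column per output variable, here AND and OR:
# query the fitted network on the training inputs
out <- compute(net, binary.data[, c("Var1", "Var2", "Var3")])
round(out$net.result)  # two columns, one per response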

Related

Trying to fit a response surface with R's formula notation

I have a set of a couple of dozen numeric variables and am trying to figure out how to compactly express a quadratic form in those variables. I also want to include the variables themselves. The idea here is that we are fitting a response surface, rather than interacting a group of treatments, as the standard R formula notation seems to assume. I am trying to get appropriate expressions turned into an R formula, suitable for estimation by different techniques, with different data sets, or over different periods.
If there is an explicit statement of how R's formula notation works, anywhere, I have not been able to find it. There is an ancient paper from which R supposedly copied the notation, but it is by no means identical to current R usage. Every other description I have found just gives examples that do not cover every case -- not even close to every case.
So, just as an example, here I try to construct a quadratic form in three variables, without writing out all the pairs by hand with an I() around each pair.
library(tidyverse)
A <- B <- C <- 1:10
LHS <- 1:10 * 600
tb <- tibble(LHS, A, B, C)
my_eq <- as.formula(LHS ~ I(A + B + C)*I(A + B + C))
I have not found any way to tell whether I have succeeded. Neither my_eq nor terms(my_eq) seems at all enlightening.
For example, can one predict whether
identical(as.formula(LHS ~ I(A + B + C)*I(A + B + C)), as.formula(LHS ~ I((A + B + C)*(A + B + C))))
is true or false? I cannot even guess. Or, to take an even simpler case, is ~ A * I(A) equal to ~ A, to ~ I(A^2), or to something else? And how would you know?
To restate my question, I would like either a full statement of how R's formula notation works, adequate to cover every case and predict what each would mean, or, failing that, a straightforward way of producing an expansion of any existing formula into all the atomic terms for which coefficients will be estimated.
This may not answer your question, but I'll post this anyway since I think it may help a little.
The I function inhibits the interpretation of operators such as "+", so your formula is probably not going to do what you expect it to do. For example, the results of lm(my_eq) will be the same as the results of doing the following:
D <- A + B + C
lm(LHS ~ D * D)
And then you may as well just do lm(LHS ~ D).
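A quick way to confirm this (my addition): in formula algebra, D * D expands to D + D + D:D, and the duplicated terms collapse, leaving just D:
attr(terms(LHS ~ D * D), "term.labels")
# [1] "D"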
For your question, I believe John Maindonald wrote a good book that explains R formulas for many situations. But it's in my office and today is a Sunday.
Edit: For the expansion, I believe you have to fit the model and then look at the call or the terms:
> my_eq <- as.formula(LHS ~ (A + B + C) * (A + B + C))
> my_formula <- lm(my_eq)
> attr(terms(my_formula), "term.labels")
[1] "A" "B" "C" "A:B" "A:C" "B:C"

What does this line of code using lm() and poly() do?

So I have been trying to decipher some code but have hit a roadblock.
Can someone please explain what is being done with P$L and P$HY to produce the model Y?
I need to understand it functionally (how the data frame is used) and from a mathematical point of view.
Thanks in advance
# create sample data frame
L <- c(15, 12, 9, 6, 3, 0)
HY <- c(0, 0.106, 0.277, 0.641, 0.907, 1)
P <- data.frame(L, HY)
# constants
d <- 5
# THIS IS THE PART THAT I DO NOT UNDERSTAND!!
Y <- lm(P$HY ~ poly(P$L, d))
So to reiterate the question: I'm trying to figure out, mathematically and functionally, what Y <- lm(HY ~ poly(L, d)) is doing.
You are building a linear model with HY as the dependent variable and L as the independent variable, using a polynomial of degree 5 in L, so five terms for this variable (note that by default poly() generates orthogonal polynomials rather than raw powers). You are saving this model in the variable Y.
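A small sketch (my addition) of what poly() itself returns, using the question's data:
L <- c(15, 12, 9, 6, 3, 0)
# poly(L, 5) builds an orthogonal polynomial basis of degree 5:
# a 6 x 5 matrix with one column per degree, 1 through 5
basis <- poly(L, 5)
dim(basis)       # 6 5
colnames(basis)  # "1" "2" "3" "4" "5"
The model then estimates one coefficient per basis column plus an intercept; with six observations and six coefficients, the degree-5 fit passes through every data point exactly.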

Solve a redundant system of linear equations in R

I have a non-invertible matrix A and a vector b for which I believe there is a solution x to Ax = b. I would like to find an example of such an x. When I try solve(A, b) in R, it produces an error because A is singular. Is there any way to make R give me one such solution?
I eventually tried lm(b ~ 0 + A), which works. It leaves the estimates for some columns as NA, which you can replace with 0 to get an example solution. For example,
A <- matrix(c(1, 1, 0, 0), nrow = 2, byrow = FALSE)
b <- c(2, 2)
lm(b ~ 0 + A)
will produce coefficients 2 and NA for the two columns of A. solve(A, b), lsfit(A, b), and qr.solve(A, b) do not work.
Edit: MASS::ginv(A) %*% b works too.
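A minimal sketch of the pseudoinverse route (my addition; assumes the MASS package is installed):
library(MASS)
A <- matrix(c(1, 1, 0, 0), nrow = 2)  # singular: second column is all zeros
b <- c(2, 2)
x <- ginv(A) %*% b  # minimum-norm least-squares solution, here (2, 0)
A %*% x             # reproduces b, confirming x solves Ax = b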

How to find a percentile that maximizes the correlation coefficient between two vectors?

Suppose I have two continuous vectors, like so:
set.seed(123)
df <- data.frame(x = rnorm(100),
                 y = rnorm(100, 3, 5))
with(df, cor(x, y))
My question is how to find the percentile of x that maximizes the absolute correlation between x and y, as in:
perc <- quantile(df$x, 0.3)
df1 <- subset(df, x > perc)
with(df1, cor(x, y))
Namely how to find perc?
This problem is ill-defined. Take your example data set and the function whose maximum you want to find (copied from #coffeinjunky):
set.seed(123)
df <- data.frame(x = rnorm(100),
y = rnorm(100,3,5))
findperc <- function(prop, dat) {
  perc <- quantile(dat$x, prop)
  with(subset(dat, dat$x > perc), abs(cor(x, y)))
}
Now plot the result of findperc for percentiles between 0 and 1.
x <- seq(0, 1, 0.01)
plot(x, sapply(x, findperc, df), type = "l")
The circled point in the plot marks the value found by optimize, as in #coffeinjunky's answer. It is clearly only a local maximum. The relevance of #Thierry's warning, "You need to rethink the question. As soon as x and y contain only 2 elements the correlation will be either 1 or -1", should be apparent on the right-hand side of the plot.
In general, the fact that you are getting moderate to high correlations from independently generated random variables should warn you that the results are spurious and the method suspect.
Well, why not take your question literally, and just search for it? For instance, try:
findperc <- function(prop, dat) {
  perc <- quantile(dat$x, prop)
  with(subset(dat, dat$x > perc), abs(cor(x, y)))
}
optimize(findperc, lower = 0, upper = 1, maximum = TRUE, dat = df)
This defines a function that computes the absolute correlation between your vectors after cutting at a given percentile (here a single value), just as in your example code. I then feed this function to a one-dimensional optimizer, which searches for the input that produces the maximum output.
Edit: Thanks to #A. Webb's answer I learned that optimize performs a local search (golden-section search combined with successive parabolic interpolation) rather than a grid search. I had thought this was the main difference between optim and optimize, a clearly wrong assumption I should have checked myself. However, to provide a grid-search solution that gets closer to the global maximum, one could use the following:
x <- seq(0, 0.97, 0.01)
x[which.max(sapply(x, findperc, dat = df))]
Note that I have cut x off at 97%. This ensures that at least 3 observations remain in the sample (given a sample size of 100).

R: Finding solutions for new x values with nlmrt

Good day,
I have tried to figure this out, but I really can't! Here is an example of my data in R:
x <- c(36, 71, 106, 142, 175, 210, 246, 288, 357)
y <- c(19.6, 20.9, 19.8, 21.2, 17.6, 23.6, 20.4, 18.9, 17.2)
table <- data.frame(x, y)
library(nlmrt)
curve <- "y ~ a + b*exp(-0.01*x) + (c*x)"
ones <- list(a = 1, b = 1, c = 1)
Then I use wrapnls to fit the curve and to find a solution:
solve <- wrapnls(curve, data = table, start = ones, trace = FALSE)
This is all fine and works for me. Then, using the following, I obtain a prediction of y for each of the x values:
predict(solve)
But how do I find the prediction of y for new x values? For instance:
new_x <- c(10, 30, 50, 70)
I have tried:
predict(solve, new_x)
predict(solve, 10)
It just gives the same output as:
predict(solve)
I really hope someone can help! I know that if I take the values from 'solve' for parameters a, b, and c and substitute them into the curve formula with the desired x value, I would be able to do this, but I'm wondering if there is a simpler option, one that does not require plotting the data first.
predict() requires the new data to be a data.frame with column names that match the variable names used in your model (whether your model has one or many variables). All you need to do is use
predict(solve, data.frame(x=new_x))
# [1] 18.30066 19.21600 19.88409 20.34973
And that will give you predictions for just those 4 values. It's somewhat unfortunate that any mistake in specifying the new data results in the fitted values for the original model being returned; an error message would probably have been more useful, but oh well.
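For reference, a minimal sketch of the manual route the asker mentions (my addition; it assumes wrapnls returns an nls-style fit whose parameters coef() can extract):
cf <- coef(solve)  # named vector with elements a, b, c
new_x <- c(10, 30, 50, 70)
cf[["a"]] + cf[["b"]] * exp(-0.01 * new_x) + cf[["c"]] * new_x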
