Neural network with dependencies between features - multidimensional-array

here is my issue:
I have shapes defined by their coordinates before and after a rotation. I am trying to create a neural network with a linear regression output. X1 (x1, y1) and X2 (x2, y2) are my inputs (features) and y_train represents the angle between shape X1 and shape X2. X_train is a stack of X1 and X2.
Here is what I have tried:
print('shape X1:', np.shape(X1))
print('shape X2:', np.shape(X2))
shape X1:(XXX, 2, 120)
shape X2:(XXX, 2, 120)
print('shape X_train:', np.shape(X_train))
shape X_train: (XXX, 2, 2, 120)
print('shape y:', np.shape(y_train))
shape y: (XXX, 1)
# Model definition
model = keras.Sequential()
model.add(layers.Dense(X_train.shape[0], input_shape=(X_train.shape[1], X_train.shape[2], X_train.shape[3]), activation='relu'))
model.add(layers.Dense(1, activation="linear"))
Then, when I try to train the model with:
model.compile(loss='mse',
              optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              metrics='mse')
model.fit(X_train, y_train, epochs=500, validation_split=0.3, verbose=1)
I have this issue:
""" InvalidArgumentError: Incompatible shapes: [32,2,2,1] vs. [32,1]
[[node gradient_tape/mean_squared_error/BroadcastGradientArgs (defined at :16) ]] [Op:__inference_train_function_310234] """
It works when I add a Flatten() layer, but I don't want to use it because X1 and X2 would be "mixed"...
Does someone have a solution? :-)

Related

R tree doesn't use all variables(why?)

Hi I'm working on a decision tree.
tree1=tree(League.binary~TME.factor+APM.factor+Wmd.factor,starcraft)
The tree shows a partitioning based solely on APM.factor, and the leaves aren't pure.
I tried creating a tree with a subset of 300 of the 3395 observations, and it used more than one variable. What went wrong in the first case? Did it simply not need the other two variables, so it used only one?
Try playing with the tree.control() parameters, for example setting minsize very low so that you end up with a single observation in each leaf (overfitting), e.g.:
model = tree(y ~ X1 + X2, data = data, control = tree.control(nobs=n, minsize = 2, mindev=0))
Also, try the same thing with the rpart package, which is the "new" version of tree, and see what results you get. You can also look at the importance of the variables. Here is a syntax example:
install.packages("rpart")
install.packages("rpart.plot")
library(rpart)
library(rpart.plot)
## fit tree
### alt1: class
model = rpart(y ~ X1 + X2, data=data, method = "class")
### alt2: reg
model = rpart(y ~ X1 + X2, data=data, control = rpart.control(maxdepth = 30, minsplit = 1, minbucket = 1, cp=0))
## show model
print(model)
rpart.plot(model, cex=0.5)
## importance
model$variable.importance
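If you want an actual plot of those importances (the call above only prints them), a minimal sketch:
## plot importance (sketch; variable.importance may be NULL if there are no competing/surrogate splits)
imp = model$variable.importance
barplot(imp, main = "Variable importance")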
Note that since trees do binary splits, it is possible that a single variable explains most/all of the SSR (for regression). Try plotting the response against each regressor and see if there is any significant relation to anything but the variable you're getting; see the sketch below.
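A minimal sketch of that check, assuming the starcraft data frame and the column names from your question:
## response vs. each candidate regressor (sketch)
par(mfrow = c(1, 3))
plot(League.binary ~ TME.factor, data = starcraft)
plot(League.binary ~ APM.factor, data = starcraft)
plot(League.binary ~ Wmd.factor, data = starcraft)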
In case you want to run the examples above, here is a data simulation (put it at the beginning of your code):
n = 12000
X1 = runif(n, -100, 100)
X2 = runif(n, -100, 100)
## 1. SQUARE DATA
# y = ifelse( (X1< -50) | (X1>50) | (X2< -50) | (X2>50), 1, 0)
## 2. CIRCLE DATA
y = ifelse(sqrt(X1^2+X2^2)<=50, 0, 1)
## 3. LINEAR BOUNDARY DATA
# y = ifelse(X2<=-X1, 0, 1)
# Create
color = ifelse(y==0,"red","green")
data = data.frame(y,X1,X2,color)
# Plot
data$color = as.character(data$color)  # keep colours as plain characters (avoids needing the magrittr pipe)
plot(data$X2 ~ data$X1, col = data$color, type='p', pch=15)

Julia MethodError: no method matching (::Dense{typeof(logistic),CuArray{Float32,2,Nothing},CuArray{Float32,1,Nothing}})(::Float32)

I have the following training data in CuArrays.
X: 300×8544 CuArray{Float32,2,Nothing}
y: 5×8544 Flux.OneHotMatrix{CuArray{Flux.OneHotVector,1,Nothing}}
and I have the following model I want to train:
# define activation
logistic(x) = 1. / (1 .+ exp.(-x))
# first define a 2-layer MLP model
model = Chain(Dense(300, 64, logistic),
              Dense(64, c),
              softmax) |> gpu
# define the loss
loss(x, y) = Flux.crossentropy(model(x), y)
# define the optimiser
optimiser = ADAM()
but if I do
Flux.train!(loss, params(model), zip(X, y), optimiser)
I get the following error:
MethodError: no method matching (::Dense{typeof(logistic),CuArray{Float32,2,Nothing},CuArray{Float32,1,Nothing}})(::Float32)
How should I resolve this?
@D.Danier Please provide a minimal working example (MWE), i.e. complete code that people can copy, paste and run. Below is an example:
#Pkg.activate("c:/scratch/flux-test")
using CuArrays, Flux
CuArrays.allowscalar(false)
# define activation
# you don't need the broadcasting dots here
logistic(x) = 1 / (1 + exp(-x))
# ensure your code works on GPU
CuArrays.@cufunc logistic(x) = 1 / (1 + exp(-x))
X = cu(rand(300, 8544))
y = cu(rand(5, 8544))
c = 5
# first define a 2-layer MLP model
model = Chain(Dense(300, 64, logistic),
              Dense(64, c),
              softmax) |> gpu
# define the loss
loss(x, y) = Flux.crossentropy(model(x), y) |> gpu
model(X)
# define the optimiser
optimiser = ADAM()
loss(X, y)
Flux.train!(loss, params(model), zip(eachcol(X), eachcol(y)), optimiser)
When you call Flux.train!, you must tell Flux that you want to pair up the columns of X and y to compute the loss. By the way, this is probably less than ideal, as it performs one gradient update per column. You may want to group the columns into mini-batches. Or, if your problem is genuinely this small, you may want to compute the whole thing in one go, e.g.:
Flux.train!(loss, params(model), [(X, y)], optimiser)
which basically says: compute the loss based on the whole of X and y.

Arc length of piecewise spline using R

I know there are many ways to calculate the arc length of a curve, but I am looking for an efficient way to calculate the arc length of a piecewise spline through irregularly spaced points.
The actual curve I'm trying to find the length of is quite complex (a contour line), so here is a quick example using a unit circle, where the actual arc length is known to be 2*pi:
# Generate "random" data
set.seed(50)
theta = seq(0, 2*pi, length.out = 50) + runif(50, -0.05, 0.05)
theta = c(0, theta[theta >=0 & theta <= 2*pi], 2*pi)
data = data.frame(x = cos(theta), y = sin(theta))
# Bezier Curve fit
library("bezier")
bezierArcLength(data, t1=0, t2=1)$arc.length
# Calculate arc length using euclidean distance
library("dplyr")
data$eucdist = sqrt((data$x - lag(data$x))^2 + (data$y - lag(data$y))^2)
print(paste("Euclidean distance:", sum(data$eucdist[-1])))
print(paste("Actual distance:", 2*pi))
# Output
Bezier distance: 5.864282
Euclidean distance: 6.2779
Actual distance: 6.2831
The closest thing I have found is https://www.rdocumentation.org/packages/pracma/versions/1.9.9/topics/arclength but I would have to parameterise my data to be some function(t) ...spline(data, t)... to use arclength. I tried this, but the fitted spline ran along the middle of the circle rather than along the circumference.
Another alternative I have been (unsuccessfully) trying is to fit piecewise splines and determine the length of each spline.
Any help would be much appreciated!
EDIT: Added alternate method using the Bezier package, but the arc length found is even worse than just using the Euclidean method.
In lieu of community answers, I've cobbled together a solution which seems to work for what I was after! I'll leave my code here in case anyone has the same question and comes across this.
# Libraries
library("bezier")
library("pracma")
library("dplyr")
# Very slow for loops, sorry! Didn't write it as an apply function
output = data.frame()
for (i in 1:100) {
  # Generate "random" data
  # set.seed(50)
  theta = seq(0, 2*pi, length.out = 50) + runif(50, -0.1, 0.1)
  theta = sort(theta)
  theta = c(0, theta[theta >= 0 & theta <= 2*pi], 2*pi)
  data = data.frame(x = cos(theta), y = sin(theta))

  # Bezier curve fit
  b = bezierArcLength(data, t1 = 0, t2 = 1)$arc.length

  # Pracma piecewise cubic spline, parameterised by angle
  t = atan2(data$y, data$x)
  t = t + ifelse(t < 0, 2*pi, 0)
  csx <- cubicspline(t, data$x)
  csy <- cubicspline(t, data$y)
  dcsx = csx; dcsx$coefs = t(apply(csx$coefs, 1, polyder))
  dcsy = csy; dcsy$coefs = t(apply(csy$coefs, 1, polyder))
  ds <- function(t) sqrt(ppval(dcsx, t)^2 + ppval(dcsy, t)^2)
  s = integral(ds, t[1], t[length(t)])

  # Calculate arc length using euclidean distance
  data$eucdist = sqrt((data$x - lag(data$x))^2 + (data$y - lag(data$y))^2)
  e = sum(data$eucdist[-1])

  # Use path distance as parametric variable
  data$d = c(0, cumsum(data$eucdist[-1]))
  csx <- cubicspline(data$d, data$x)
  csy <- cubicspline(data$d, data$y)
  dcsx = csx; dcsx$coefs = t(apply(csx$coefs, 1, polyder))
  dcsy = csy; dcsy$coefs = t(apply(csy$coefs, 1, polyder))
  ds <- function(t) sqrt(ppval(dcsx, t)^2 + ppval(dcsy, t)^2)
  d = integral(ds, data$d[1], data$d[nrow(data)])

  # Actual value
  a = 2*pi

  # Append to result
  output = rbind(
    output,
    data.frame(bezier = b, cubic.spline = s, cubic.spline.error = (s - a)/a*100,
               euclidean.dist = e, euclidean.dist.error = (e - a)/a*100,
               dist.spline = d, dist.spline.error = (d - a)/a*100))
}
# Summary
apply(output, 2, mean)
# Summary output
bezier                5.857931e+00
cubic.spline          6.283180e+00
cubic.spline.error   -7.742975e-05
euclidean.dist        6.274913e+00
euclidean.dist.error -1.316564e-01
dist.spline           6.283085683
dist.spline.error    -0.001585570
I still don't quite understand what bezierArcLength does, but I'm very happy with my solution using cubicspline from the pracma package as it is a lot more accurate.
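As an aside, the parameterisation idea from the question should also work directly with pracma::arclength once the cumulative chord length d is used as the parameter; here is a minimal, untested sketch reusing csx, csy and data$d from the loop above:
f <- function(t) c(ppval(csx, t), ppval(csy, t))  # point on the curve at parameter t
arclength(f, data$d[1], data$d[nrow(data)])$length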
Other solutions are still more than welcome!

plotting lrc in SSasymp in R

My question is similar to this unanswered one: working with SSasymp in r
For a simple SSmicmen:
x1 = seq (0,10,1)
y1 = SSmicmen(x1, Vm=10, K=0.5)
plot(y1 ~ x1, type="l")
the value of K is easily identified at the point (0.5, 5), i.e. where the response reaches half of the maximum Vm.
Given a simple SSasympOrig:
x2 = seq (0,10,1)
y2 = SSasympOrig(x2, Asym=10, lrc=0.1)
# Asym*(1 - exp(-exp(lrc)*input))
plot(y2 ~ x2, type="l")
is there a way to represent and/or identify the meaning and/or effect of the parameter "lrc" on the resulting graph, in a similar way to the example above?
Sure, you can visualize this:
x2 = seq (0,10,0.01)
y2 = SSasympOrig(x2, Asym=10, lrc=0.1)
# Asym*(1 - exp(-exp(lrc)*input))
plot(y2 ~ x2, type="n")
for (lrc in (10^((-5):1))) {
  y2 = SSasympOrig(x2, Asym=10, lrc=lrc)
  # Asym*(1 - exp(-exp(lrc)*input))
  lines(y2 ~ x2, type="l", col = 6+log10(lrc))
}
This parameter controls how fast the asymptote is approached. Getting this from studying the equation requires high-school-level maths. Or you could read the Wikipedia entry about the half-life:
y2 = SSasympOrig(x2, Asym=10, lrc=0.1)
# Asym*(1 - exp(-exp(lrc)*input))
plot(y2 ~ x2, type="l")
points(x = log(2) / exp(0.1), y = 0.5 * 10)
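For reference, that point follows directly from the model equation: Asym*(1 - exp(-exp(lrc)*x)) equals Asym/2 when exp(-exp(lrc)*x) = 1/2, i.e. at x = log(2)/exp(lrc). A quick numerical check:
# half of Asym = 10 should be reached at x = log(2)/exp(lrc)
SSasympOrig(log(2)/exp(0.1), Asym=10, lrc=0.1)  # ~ 5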

R: nls() error. "singular gradient matrix at initial parameter estimates"

I have a simple example below (which doesn't work) that attempts a multivariate fit using the default algorithm (Gauss-Newton). I get the error: "Error in nlsModel(formula, mf, start, wts) : singular gradient matrix at initial parameter estimates".
## Defining the two independent x variables, and the one dependent y variable.
x1 = 1:100*.01
x2 = (1:100*.01)^2
y1 = 2*x1 + 0.5*x2
## Putting into a data.frame for the nls() function.
df = data.frame(x1, x2, y1)
## Starting parameters: a = 2.1, b = 0.4 (and taking c = 0)
fit_results <-nls(y1 ~ x1*a + x2*b +c, data=df, start=c(a=2.1, b=0.4, c=0))
Note: even when I set a = 2, and b = 0.5 above, I still get the same error message.
Thanks Brian, I'm not sure how to mark a comment as the selected answer. Here is code that works... it turns out I needed to add more randomness to the y1 dependent variable.
## Defining the two independent x variables, and the one dependent y variable.
x1 = 1:100*0.1
x2 = runif(100,0,10)
y1 = 2*x1 + 0.5*x2*runif(100,0.9,1.1)
## Putting into a data.frame for the nls() function.
df = data.frame(x1, x2, y1)
fit_results <-nls(y1 ~ x1*a + x2*b +c, data=df, start=c(a=2.1, b=0.4, c=0))
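Note that since this model is linear in the parameters a, b and c, the same fit can also be done with ordinary least squares via lm(), which does not suffer from the singular-gradient problem; a minimal sketch:
## equivalent linear fit: the intercept plays the role of c
lm_results <- lm(y1 ~ x1 + x2, data = df)
summary(lm_results)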
