L2 distance between functional data (smoothed curves) in R

I have used smoothing to create two "functions" fd4 and fd6.
fit6 <- smooth.basis(tid6, zbegfor, fdParobj2)
fd6 <- fit6$fd
I want to measure the L2 distance between them on the interval [0,1], but I haven't been able to find an appropriate way.
||f − g||_2 = sqrt( ∫_0^1 |f(x) − g(x)|^2 dx )
My best lead so far has been the answer to "How to calculate functional L_2 norm using R", but when I use fd6 in place of its f <- function(x) x^2, I get the following message:
"Error in fac - fdmat : non-conformable arrays".
I've spent hours trying to find a solution. Please help me!
Now with reproducible code:
library(fda)
# Smoothing of movement pattern without obstacle rescaled to the interval [0,1]
without <- c(22.5050173512478, 22.5038665040295, 22.5171851824298, 22.5368096190746,
22.5770229184757, 22.6709727229898, 22.8195669635573, 23.0285400460222,
23.3240853426905, 23.6895323912605, 24.0905709304813, 24.5674870961964,
25.129085512519, 25.7433521858875, 26.4096817521118, 27.1338935155912,
27.906416101033, 28.7207273157549, 29.5431756517467, 30.3697951466496,
31.2214907341765, 32.0625307132683, 32.8786845916855, 33.671550678219,
34.4449992914392, 35.1852293010227, 35.8866367048324, 36.5650863548079,
37.1776116180247, 37.7706354957587, 38.3082855431959, 38.8044130844639,
39.2471137254193, 39.6193031585418, 39.9685683244076, 40.2345560551869,
40.4394442661545, 40.5712407258558, 40.6905311089523, 40.712419802203,
40.6704560575084, 40.5583379372846, 40.3965425630546, 40.1443139907057,
39.8421899334408, 39.4671160834355, 39.018733225651, 38.5381390971577,
38.035680135599, 37.4625783280288, 36.8649362406917, 36.2320264206665,
35.5599736527209, 34.8983871226943, 34.2058073957721, 33.4893682831911,
32.7568501019309, 32.0241649500974, 31.3036406455137, 30.587636320768,
29.8962657607091, 29.2297665999702, 28.6003939337949, 28.0003531206639,
27.433551463149, 26.9088532545635, 26.4265682839796, 25.974193299003,
25.5553146923473, 25.1701249455904, 24.8107813804098, 24.4776168601955,
24.167582682288, 23.8726502760669, 23.589703789663, 23.3222235336882,
23.0616248799115, 22.8185342685607, 22.6767541125512, 22.6567795841271,
22.6488510112824, 22.6436058079441, 22.6391304188382)
timewithout <- (1:length(without))/length(without) # For scaling
splineBasis <- create.bspline.basis(c(0,1), nbasis=25, norder=6) # The basis for smoothing
basis <- fdPar(fdobj=splineBasis, Lfdobj=2, lambda=0.00001) # Roughness penalty on the 2nd derivative
fitwithout <- smooth.basis(timewithout, without, basis) # Smoothing
fdwithout <- fitwithout$fd
# Same but movement is over an obstacle
with <- c(22.4731637093167, 22.4655561889073, 22.4853719755102, 22.4989400065304,
22.5495656349031, 22.666945409755, 22.8368941117498, 23.0846080078369,
23.4160560011242, 23.8285634914224, 24.2923085321078, 24.8297004047422,
25.4884540279408, 26.2107053559, 27.0614232848574, 27.9078055119721,
28.8449720096674, 29.8989669834473, 30.996962022701, 32.1343108758062,
33.3286403418359, 34.6364870430171, 35.9105342483246, 37.1883582665643,
38.467212668323, 39.7381525466373, 41.0395064969214, 42.3095531191294,
43.5708069740233, 44.7881178787717, 45.9965529977777, 47.1643807808923,
48.284786275036, 49.3593991064962, 50.3863035442644, 51.3535489662494,
52.2739716491521, 53.1338828493223, 53.9521101656512, 54.7037562884229,
55.3593092084143, 55.9567618011946, 56.4768579145271, 56.9251919073806,
57.2971965985674, 57.5937987523734, 57.8158626068961, 57.9554856023804,
58.009777126789, 57.9863251605612, 57.8932199088797, 57.6988126618694,
57.4350394069443, 57.1112025796509, 56.7580579506751, 56.2680669960935,
55.6963799946038, 55.0574070566765, 54.3592140352073, 53.6072275005723,
52.7876353306759, 51.9172334605074, 50.9879178368431, 49.9953932631072,
48.9460707853802, 47.8511977258834, 46.6827266395278, 45.4635999409637,
44.2633368255294, 43.0386729762103, 41.7880095105045, 40.4834298069985,
39.1610223705633, 37.9241872458281, 36.7158342529737, 35.5408830466013,
34.4070964101159, 33.307156473109, 32.2514661493348, 31.2475129673168,
30.2990631096187, 29.4096423238141, 28.590173995037, 27.8437368908309,
27.17493959411, 26.5779670740351, 26.0377946174036, 25.5731202027558,
25.1761397934058, 24.8319659155494, 24.5479180062239, 24.2940808334792,
24.09388897537, 23.934861348149, 23.7999923744404, 23.6877461628934,
23.5982309560843, 23.5207597985246, 23.4354446383638, 23.3604065265148,
23.2819126915765, 23.1725048152396, 23.0637455648184, 22.9426779696074,
22.8079176617495, 22.69360227086, 22.6622165457034, 22.6671302753094,
22.66828206305, 22.6703162730529, 22.6715781657376)
timewith <- (1:length(with))/length(with)
fitwith <- smooth.basis(timewith, with, basis) # Smoothing
fdwith <- fitwith$fd
# Plots for understanding
plot(fdwith, col=2) # Smoothed curve for movement over obstacle (red)
plot(fdwithout, col=4, add = TRUE) # Same but no obstacle (blue)
# I have to find the L2-distance between these curves

First, one can take advantage of arithmetic operations on fd objects: fdwith - fdwithout is itself an fd object. Second, there may be a better way to evaluate fd objects at specific points, but this also works: predict(fdwith - fdwithout, newdata = 0.5). So,
sqrt(integrate(function(x) predict(fdwith - fdwithout, newdata = x)^2, lower = 0, upper = 1)$value)
# [1] 9.592434
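An alternative sketch that stays inside the fda package (not part of the original answer): inprod() numerically integrates the product of two fd objects over their common domain, so the L2 distance is the square root of the inner product of the difference with itself. It should agree with the integrate() result up to quadrature error.
diff_fd <- fdwith - fdwithout
sqrt(inprod(diff_fd, diff_fd)) # inprod() returns a 1x1 matrix; sqrt() is applied elementwise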

Related

Find intercept of stress-strain curve with 0.2% yield line

I want to find the point of intersection between my stress-strain curve from a tensile test and the 0.2% offset line.
I am using plot() and lines() to graph the curve and the line. The stress-strain curve is imported from a CSV that contains the raw test data for stress (y axis) and strain (x axis). Below is the code snippet:
TH_1_frame <- data.frame(TH_1_Strain, TH_1_Stress) # the x and y data for stress strain curve
Offset <- 0.002
TH_1_Stress02 <- TH_1_Mod*1000*(TH_1_Strain - Offset) # TH_1_Mod is the elastic modulus and the gradient of the straight line
TH_1_Strain02 <- TH_1_Strain + Offset
TH_1_02frame <- data.frame(TH_1_Strain02, TH_1_Stress02) # the x and y for the 0.2% offset line
plot(TH_1_Strain, TH_1_Stress)
lines(TH_1_Strain02, TH_1_Stress02) # plotting the curve and line
Attached is the output of the plot; I am unsure how I would go about mathematically finding the intersection seen in it.
Thanks
I tried my best to find the data around this intersection; here it is (I can attach the CSV if anyone would like). A sketch of one approach follows the data below.
dput(TH_1_Strain[9450:9500])
c(0.01567022254, 0.01566837231, 0.01566786692, 0.0156739677,
0.01566912234, 0.01566456631, 0.01567340766, 0.01567904775, 0.01568320269,
0.0156837975, 0.01568157723, 0.01568231732, 0.0156824626, 0.01568279291,
0.01568822314, 0.01569599782, 0.01569751774, 0.01569260284, 0.01569438353,
0.01569401845, 0.01569528257, 0.01570160811, 0.01570275302, 0.01571064318,
0.01570881406, 0.01570606853, 0.01571034392, 0.01570852846, 0.01571317886,
0.01572433362, 0.01572758953, 0.01572479432, 0.01569923262, 0.01569651316,
0.01570669313, 0.01571742942, 0.01573171467, 0.01576093584, 0.01573084916,
0.01569425811, 0.01570926358, 0.01572924977, 0.01574603468, 0.01575124015,
0.01574295014, 0.0157436803, 0.0157436803, 0.01571707924, 0.01571890339,
0.01574868957, 0.01576487596)
dput(TH_1_Stress[9450:9500])
c(785.800149043174, 785.821152259166, 785.839877152789, 785.856608787737,
785.871061735784, 785.884019739826, 785.896123281846, 785.908297907524,
785.918265659023, 785.930013418222, 785.937987546514, 785.943825519753,
785.953650739404, 785.966252960625, 785.976007096618, 785.989748114491,
786.001709124664, 786.012175190831, 786.021431012156, 786.030045986964,
786.039230360099, 786.050835587451, 786.059236946754, 786.065715766509,
786.072479649959, 786.076110749018, 786.079812931735, 786.083230415289,
786.085793436822, 786.087502360865, 786.087573444522, 786.084084877311,
786.080524861909, 786.079955463583, 786.078673770551, 786.079172085219,
786.078673770551, 786.078104372224, 786.075469902502, 786.07333411198,
786.072408566301, 786.075968217171, 786.081165708425, 786.085508737659,
786.09042152975, 786.091204543582, 786.09440841163, 786.099392287379,
786.112350291421, 786.12367081961, 786.138550998668)

Fit a dataset that presents an elbow/knee bend using nlsLM and "force" the coefficients to be near a threshold

Because I need to fit a dataset related to a two-dimensional diffusion (D2) process with a Sestak-Berggren model (derived from the logistic model), I have to understand how to use nlsLM in the presence of an elbow/knee, because the following easy way did not work:
library(minpack.lm) # for nlsLM
x=c(1.000000e-05, 1.070144e-05, 1.208082e-05, 1.456624e-05, 1.861581e-05, 2.490437e-05, 3.407681e-05, 4.696710e-05,
6.474653e-05, 8.870800e-05, 1.206194e-04, 1.624442e-04, 2.172716e-04, 2.882747e-04, 3.794489e-04, 4.956619e-04,
6.427156e-04, 8.275095e-04, 1.058201e-03, 1.344372e-03, 1.697222e-03, 2.129762e-03, 2.657035e-03, 3.296215e-03,
4.067301e-03, 4.992831e-03, 6.098367e-03, 7.412836e-03, 8.968747e-03, 1.080251e-02, 1.295471e-02, 1.547045e-02,
1.839960e-02, 2.179713e-02, 2.572334e-02, 3.024414e-02, 3.543131e-02, 4.136262e-02, 4.812205e-02, 5.579985e-02,
6.449256e-02, 7.430297e-02, 8.533991e-02, 9.771803e-02, 1.115573e-01, 1.269824e-01, 1.441219e-01, 1.631074e-01,
1.840718e-01, 2.071477e-01, 2.324656e-01, 2.601509e-01, 2.903210e-01, 3.230812e-01, 3.585200e-01, 3.967033e-01,
4.376671e-01, 4.814084e-01, 5.278744e-01, 5.769469e-01, 6.284244e-01, 6.819947e-01, 7.371982e-01, 7.933704e-01,
8.495444e-01, 9.042616e-01)
ynorm=c(
1.000000e+00, 8.350558e-01, 6.531870e-01, 4.910995e-01, 3.581158e-01, 2.553070e-01, 1.814526e-01, 1.290639e-01,
9.219591e-02, 6.623776e-02, 4.817180e-02, 3.543117e-02, 2.624901e-02, 1.961542e-02, 1.478284e-02, 1.123060e-02,
8.597996e-03, 6.631400e-03, 5.151026e-03, 4.028428e-03, 3.171096e-03, 2.511600e-03, 2.001394e-03, 1.604211e-03,
1.292900e-03, 1.047529e-03, 8.530624e-04, 6.981015e-04, 5.739778e-04, 4.740553e-04, 3.932255e-04, 3.275345e-04,
2.739059e-04, 2.299339e-04, 1.937278e-04, 1.637946e-04, 1.389500e-04, 1.182504e-04, 1.009406e-04, 8.641380e-05,
7.418032e-05, 6.384353e-05, 5.508090e-05, 4.762920e-05, 4.127282e-05, 3.583451e-05, 3.116813e-05, 2.715264e-05,
2.368759e-05, 2.068935e-05, 1.808802e-05, 1.582499e-05, 1.385102e-05, 1.212452e-05, 1.061032e-05, 9.278534e-06,
8.103650e-06, 7.063789e-06, 6.140038e-06, 5.315870e-06, 4.576585e-06, 3.908678e-06, 3.298963e-06, 2.732866e-06,
2.189810e-06, 1.614149e-06)
dfxy <- data.frame(x = x[1:length(ynorm)], y = ynorm) # name the columns explicitly
fn <- "co*((1-x)^m)*(x^n)"
mod_fit <- nlsLM(ynorm ~ eval(parse(text = fn)), start = c(co = 0.5, m = -1, n = 0.5), data = dfxy)
plot(dfxy$x, dfxy$y, xlim = c(0, 0.001))
plot(dfxy$x, fitted(mod_fit)[1:length(dfxy$x)], xlim = c(0, 0.001))
The only solution I've found is based on https://stackoverflow.com/a/54286595/6483091: first find the "elbow", then apply the regression only to the reduced dataset. That works, but I was wondering whether there are other solutions (tweaking the parameters of the regression instead of doing it in two steps, somehow letting nlsLM "recognize" the elbow, e.g. via a dynamic first-derivative threshold, while still forcing fn for the regression).
Also the "biggest problem is that I alredy know the "range" for the parameters" (i.e.
Applying a regression using "good" starting point (coefficients near the "ground truth" ynorm <- 0.973*(1-x)^(0.425)*x^(-1.008) ) but even if I give them as a starting point there is no way I obtain anything with similar values.
the "ground truth"
plot(x,yrnom) yt <- 0.973*(1-x)^(0.425)*x^(-1.008)
lines(x,yt/max(yt))
Here is a solution using nls and a hyperbolic fit:
# x and ynorm exactly as defined in the question above
dfxy <- data.frame(x = x[1:length(ynorm)], y = ynorm)
plot(y ~ x, data = dfxy)
mod <- nls(y ~ a/x + b, data = dfxy, start = list(a = 1, b = 0))
lines(dfxy$x, predict(mod))
The fit isn't perfect, though. I guess there is no continuous function to fit a right angle...
Depending on what you want to use the regression for, you could also use a loess regression:
dfxy <- data.frame(x[1:length(ynorm)],ynorm)
names(dfxy) <- c("x", "y")
plot(y ~ x, data = dfxy)
mod <- loess(y ~ x, data = dfxy, span = 0.1)
lines(x = dfxy$x, y = predict(mod, newdata = dfxy$x), col = "red")
Resulting in: [plot of the loess fit]
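A further hedged observation, separate from the answers above: the model co*((1-x)^m)*(x^n) is linear in its parameters after taking logs, so a plain lm() fit on the log scale is worth trying; it also evens out the roughly six orders of magnitude spanned by ynorm, which otherwise lets the largest values dominate a least-squares fit.
# Sketch: log(y) = log(co) + m*log(1-x) + n*log(x), which lm() fits directly.
# Note ynorm appears to be normalized to max 1, so the intercept absorbs that scale factor.
fit_log <- lm(log(ynorm) ~ log(1 - x) + log(x))
co_hat <- unname(exp(coef(fit_log)[1])) # scale factor (up to normalization)
m_hat  <- unname(coef(fit_log)[2])      # exponent of (1 - x)
n_hat  <- unname(coef(fit_log)[3])      # exponent of x
plot(x, ynorm, log = "xy")
lines(x, co_hat * (1 - x)^m_hat * x^n_hat, col = "blue")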

How to visualize the convolution layers and feature maps in mxnet after a CNN has been trained?

I want to plot or visualize the output of each layer of a trained CNN built with mxnet in R, like those abstract-art images of what each layer of a neural network "sees".
But I don't know how. One way I can think of is to feed the trained weights and biases back into each step and plot the result. But when I try to pass model$arg.params$convolution0_weight back to mx.symbol.Convolution(), I get
Error in mx.varg.symbol.Convolution(list(...)) :
./base.h:291: Unsupported parameter type object type for argument weight, expect integer, logical, or string.
Can anyone help me?
I worked out one approach, but ran into a difficulty at one step. Here is what I did.
I found all the trained CNN's parameters in model$arg.params, and to compute with them we can use the mx.nd.* functions, as below:
# convolution layer 1 result
conv1_result <- mxnet::mx.nd.Convolution(data = mx.nd.array(train_array), weight = model$arg.params$convolution0_weight, bias = model$arg.params$convolution0_bias, kernel = c(8,8), num_filter = 50)
str(conv1_result)
tanh1_result <- mx.nd.Activation(data = conv1_result, act_type = "sigmoid")
pool1_result <- mx.nd.Pooling(data = tanh1_result, pool_type = "avg", kernel = c(4,4), stride = c(4,4))
# convolution layer 2 result
conv2_result <- mxnet::mx.nd.Convolution(data = pool1_result, weight = model$arg.params$convolution1_weight, bias = model$arg.params$convolution1_bias, kernel = c(5,5), num_filter = 50)
tanh2_result <- mx.nd.Activation(data = conv2_result, act_type = "sigmoid")
pool2_result <- mx.nd.Pooling(data = tanh2_result, pool_type = "avg", kernel = c(4,4), stride = c(4,4))
# 1st fully connected layer result
flat_result <- mx.nd.flatten(data = pool2_result)
fcl_1_result <- mx.nd.FullyConnected(data = flat_result, weight = model$arg.params$fullyconnected0_weight, bias = model$arg.params$fullyconnected0_bias, num_hidden = 500)
tanh_3_result <- mx.nd.Activation(data = fcl_1_result, act_type = "tanh")
# 2nd fully connected layer result
fcl_2_result <- mx.nd.FullyConnected(data = tanh_3_result, weight = model$arg.params$fullyconnected1_weight, bias = model$arg.params$fullyconnected1_bias, num_hidden = 100)
but when I came to the mx.nd.FullyConnected() step, I ran out of memory (I have 16 GB of RAM) and R crashed.
So, does anyone know how to batch the input data for mx.nd.FullyConnected(), or any other way to make mx.nd.FullyConnected() run successfully, the way mx.model.FeedForward.create() does?
Here is code that can help you achieve what you want. The code below displays the activations of the two convolution layers of LeNet. It takes the MNIST dataset as input (28x28 grayscale images, downloaded automatically) and produces images of the activations.
You can grab outputs from the executor. To see the list of available outputs, use names(executor$ref.outputs)
The result of each output is available as a matrix with values in the [-1, 1] range. The dimensions of the matrix depend on the parameters of the layer. The code uses these matrices to display grayscale images, where -1 is a white pixel and 1 a black pixel. (Most of the code is taken from https://github.com/apache/incubator-mxnet/issues/1152 and massaged a little bit.)
The code is self-sufficient to run, but I have noticed that if I build the model a second time in the same R session, the output names get different indices and the code then fails, because the expected output names are hard-coded. So if you decide to create the model more than once, you will need to restart the R session (or look the names up dynamically; see the sketch after the code).
Hope it helps and you can adjust this example to your case.
library(mxnet)
download.file('https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/R/data/mnist_csv.zip', destfile = 'mnist_csv.zip')
unzip('mnist_csv.zip', exdir = '.')
train <- read.csv('train.csv', header=TRUE)
data.x <- train[,-1]
data.x <- data.x/255
data.y <- train[,1]
val_ind = 1:100
train.x <- data.x[-val_ind,]
train.x <- t(data.matrix(train.x))
train.y <- data.y[-val_ind]
val.x <- data.x[val_ind,]
val.x <- t(data.matrix(val.x))
val.y <- data.y[val_ind]
train.array <- train.x
dim(train.array) <- c(28, 28, 1, ncol(train.x))
val.array <- val.x
dim(val.array) <- c(28, 28, 1, ncol(val.x))
# input layer
data <- mx.symbol.Variable('data')
# first convolutional layer
convLayer1 <- mx.symbol.Convolution(data=data, kernel=c(5,5), num_filter=30)
convAct1 <- mx.symbol.Activation(data=convLayer1, act_type="tanh")
poolLayer1 <- mx.symbol.Pooling(data=convAct1, pool_type="max", kernel=c(2,2), stride=c(2,2))
# second convolutional layer
convLayer2 <- mx.symbol.Convolution(data=poolLayer1, kernel=c(5,5), num_filter=60)
convAct2 <- mx.symbol.Activation(data=convLayer2, act_type="tanh")
poolLayer2 <- mx.symbol.Pooling(data=convAct2, pool_type="max",
kernel=c(2,2), stride=c(2,2))
# big hidden layer
flattenData <- mx.symbol.Flatten(data=poolLayer2)
hiddenLayer <- mx.symbol.FullyConnected(flattenData, num_hidden=500)
hiddenAct <- mx.symbol.Activation(hiddenLayer, act_type="tanh")
# softmax output layer
outLayer <- mx.symbol.FullyConnected(hiddenAct, num_hidden=10)
LeNet1 <- mx.symbol.SoftmaxOutput(outLayer)
# Group some output layers for visual analysis
out <- mx.symbol.Group(c(convAct1, poolLayer1, convAct2, poolLayer2, LeNet1))
# Create an executor
executor <- mx.simple.bind(symbol=out, data=dim(val.array), ctx=mx.cpu())
# Prepare for training the model
mx.set.seed(0)
# Set a logger to keep track of callback data
logger <- mx.metric.logger$new()
# Using cpu by default, but set gpu if your machine has a supported one
devices=mx.cpu(0)
# Train model
model <- mx.model.FeedForward.create(LeNet1, X=train.array, y=train.y,
eval.data=list(data=val.array, label=val.y),
ctx=devices,
num.round=1,
array.batch.size=100,
learning.rate=0.05,
momentum=0.9,
wd=0.00001,
eval.metric=mx.metric.accuracy,
epoch.end.callback=mx.callback.log.train.metric(100, logger))
# Update parameters
mx.exec.update.arg.arrays(executor, model$arg.params, match.name=TRUE)
mx.exec.update.aux.arrays(executor, model$aux.params, match.name=TRUE)
# Select data to use
mx.exec.update.arg.arrays(executor, list(data=mx.nd.array(val.array)), match.name=TRUE)
# Do a forward pass with the current parameters and data
mx.exec.forward(executor, is.train=FALSE)
# List of outputs available.
names(executor$ref.outputs)
# Plot the filters of a sample from validation set
sample_index <- 99 # sample number in the validation set; change it to see other samples
activation0_filter_count <- 30 # number of filters of the "convLayer1" layer
par(mfrow=c(6,5), mar=c(0.1,0.1,0.1,0.1)) # number of rows x columns in output
dim(executor$ref.outputs$activation0_output)
for (i in 1:activation0_filter_count) {
outputData <- as.array(executor$ref.outputs$activation0_output)[,,i,sample_index]
image(outputData,
xaxt='n', yaxt='n',
col=gray(seq(1,0,-0.1)))
}
activation1_filter_count <- 60 # number of filters of the "convLayer2" layer
dim(executor$ref.outputs$activation1_output)
par(mfrow=c(6,10), mar=c(0.1,0.1,0.1,0.1)) # number of rows x columns in output
for (i in 1:activation1_filter_count) {
outputData <- as.array(executor$ref.outputs$activation1_output)[,,i,sample_index]
image(outputData,
xaxt='n', yaxt='n',
col=gray(seq(1,0,-0.1)))
}
As a result you should see the images below for the chosen validation sample (use the RStudio left and right arrows to navigate between them).
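A hedged aside on the hard-coded output names mentioned above: instead of restarting the R session, the activation outputs can be looked up by name pattern at run time, which should survive rebuilt models whose outputs get new indices.
# Sketch: match activation outputs by prefix rather than hard-coding
# "activation0_output" / "activation1_output".
act_names <- grep("^activation", names(executor$ref.outputs), value = TRUE)
first_act <- executor$ref.outputs[[act_names[1]]] # same tensor as activation0_output above
dim(first_act)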

Convert ashape3d class to mesh3d

Can somebody help me convert an 'ashape3d' class object to class 'mesh3d'?
In ashape3d, the triangle and tetrahedron faces are stored in different fields. As I don't think there is a function that can create a mesh3d object from triangles and tetrahedra simultaneously, I tried the following (pseudocode):
library(alphashape3d) # for ashape3d() and rtorus()
library(rgl) # for tmesh3d(), qmesh3d(), mergeMeshes(), plot3d()
model <- ashape3d(rtorus(1000, 0.5, 2), alpha=0.25)
vert <- model$x[model$vert[,2]==1,]
vert <- cbind(vert,rep(1,nrow(vert)))
tria <- model$triang[model$triang[,4]==1,1:3]
tetr <- model$tetra[model$tetra[,6]==1,1:4]
m3dTria <- tmesh3d(vertices=vert , indices=tria)
m3dTetr <- qmesh3d(vertices=vert , indices=tetr)
m3d <- mergeMeshes(m3dTria,m3dTetr)
plot.ashape3d(model) # works fine
plot3d(m3d) # Error in x$vb[1, x$it] : subscript out of bounds
Does anybody have a better way?
I needed to do this recently and found this unanswered question. The easiest way to figure out what is going on is to look at plot.ashape3d and read the docs for ashape3d. plot.ashape3d only plots triangles.
The rgl package has a generic as.mesh3d function. This defines a method for that generic function.
as.mesh3d.ashape3d <- function(x, ...) {
if (length(x$alpha) > 1)
stop("I don't know how to handle ashape3d objects with >1 alpha value")
iAlpha = 1
# from help for ashape3d
# for each alpha, a value (0, 1, 2 or 3) indicating, respectively, that the
# triangle is not in the alpha-shape or it is interior, regular or singular
# (columns 9 to last)
# Pick the rows for which the triangle is regular or singular
selrows = x$triang[, 8 + iAlpha] >= 2
tr <- x$triang[selrows, c("tr1", "tr2", "tr3")]
rgl::tmesh3d(
vertices = t(x$x),
indices = t(tr),
homogeneous = FALSE
)
}
You can try it out on the data above
model <- ashape3d(rtorus(1000, 0.5, 2),alpha=0.25)
plot(model, edges=F, vertices=F)
library(rgl)
model2=as.mesh3d(model)
open3d()
shade3d(model2, col='red')

floating.pie error while using nodelabels from the ape package

I get an error while using the ARD model of the ace function in R. The error is
Error in floating.pie.asp(XX[i], YY[i], pie[i, ], radius = xrad[i], col = piecol) :
floating.pie: x values must be non-negative
library(ape)
library(phylobase)
tree <- read.nexus("data1.nexus")
plot(tree)
data <- read.csv("phagy_species.csv")
clade.full <- extract.clade(tree, node=91)
plot(clade.full)
# Drop the unwanted tips (equivalent to the original chain of 23 drop.tip calls)
tips_to_drop <- c("Bar_bre", "Par_pho", "Par_iph", "Eur_ser", "Opo_sym",
                  "Mor_pel", "Aph_hyp", "Ere_oem", "Cal_bud", "Lim_red",
                  "Act_str", "Hel_hec", "Col_dir", "Hyp_pau", "Nym_pol",
                  "Mel_cin", "Apa_iri", "Bib_hyp", "Mar_ors", "Apo_cra",
                  "Pse_par", "Lep_sin", "Dis_spi")
clade.23 <- drop.tip(clade.full, tips_to_drop)
plot(clade.23)
data2 <- as.numeric(data[,2])
model2 <- ace(data2, clade.23, type="discrete", method="ML", model="ARD")
summary(model2)
d <-logLik(model2)
deviance(model2)
AIC(model2)
plot(clade.23, type="phylogram", cex=0.8, font=3, label.offset = 0.004)
co <- c("red", "blue", "green", "black")
nodelabels(pie = model2$lik.anc, piecol = co, cex = 0.5)
And that is when I get the error. There is no error if I use the original tree without trimming, but when I trim it to my requirements, some of the pie values go negative.
Here is the data
tree file
data file
The matrix you are using for the proportions of the pie has complex numbers in it. To see this, try:
class(model2$lik.anc[1,1])
The rows of that matrix define the proportions of the pies, and they need to sum to 1. Your code produces a plot with pies if I replace the pie matrix in the nodelabels function like this:
nodelabels(pie = matrix(0.25, 64, 4), piecol = co, cex = 0.5)
because now there is a legitimate matrix for the pie argument with rows that sum to 1.
As for why you have complex numbers in that matrix, I am not sure. It is probably related to all the warnings produced by ace in your example. But that is a completely different issue.
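A hedged follow-up to that diagnosis: if the imaginary parts really are numerical noise, dropping them and renormalizing the rows should give nodelabels a valid proportion matrix.
anc <- Re(model2$lik.anc)  # keep only the real parts (assumes the imaginary parts are negligible)
anc <- anc / rowSums(anc)  # make each row sum to 1 again
nodelabels(pie = anc, piecol = co, cex = 0.5)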
I had the same problem with my data. I put my data into a matrix (like Slow Ioris suggested) and then unlisted the matrix.
x <- matrix(data = c(model2$lik.anc[,1], model2$lik.anc[,2], model2$lik.anc[,3], model2$lik.anc[,4]), ncol = 4)
plotTree(tree,ftype="i",label.offset = 0.02)
nodelabels(pie = unlist(x))
For other people having the same problem even after purging the imaginary parts of their data: the nodelabels function gives the same error when you provide a data.frame instead of a matrix to pie.
