torch code for three-dimensional multi-layer perceptron - torch

I want to transform the data A (100, 196, 512) into data B (100, 1, 512).
It seems like a job for a multi-layer perceptron. However, nn.Linear cannot be applied to three-dimensional input. How can I resolve this?
Thanks!

So essentially you want to put a batch of 100*512 vectors of dimension 196 through a network. To do this you need to restate the problem as a batch of 1D problems, e.g.
model = nn.Sequential()
model:add( nn.Transpose({2,3}) )    -- (100,196,512) -> (100,512,196)
model:add( nn.View(100*512,196) )   -- flatten into a 2D batch for nn.Linear
model:add( nn.Linear(196,1) )       -- (100*512,196) -> (100*512,1)
model:add( nn.View(100,512,1) )     -- restore the batch dimension
model:add( nn.Transpose({2,3}) )    -- (100,512,1) -> (100,1,512)
This would be easier if your data was A(100,512,196), requiring only two nn.View modules.
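For instance, a minimal sketch of that simpler case (a hypothetical illustration, assuming the target would then be B(100,512,1)):
model = nn.Sequential()
model:add( nn.View(100*512,196) )   -- (100,512,196) -> (100*512,196)
model:add( nn.Linear(196,1) )       -- collapse the 196 features to 1
model:add( nn.View(100,512,1) )     -- back to a 3D batch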

Related

No decrease in loss after lots of training

I was trying to train a convolutional network, but it is not improving, i.e. the loss is not decreasing, and the train function also terminates much more quickly than usual. Below is the minimal code to show the problem.
using Flux
data=rand(200, 100, 1, 50)
label=rand([0.0,1.0], 1, 50)
model = Chain(
    Conv((3,3), 1=>5, pad=(1,1)),
    MaxPool((2,2)),
    Conv((3,3), 5=>5, pad=(1,1)),
    MaxPool((2,2)),
    Conv((3,3), 5=>5, pad=(1,1)),
    MaxPool((2,2)),
    x -> reshape(x, :, size(x, 4)),
    x -> σ.(x),
    Dense(1500, 100),
    Dense(100, 1)
)
model(data)
loss=Flux.mse
opt=Descent(0.1)
param=params(model)
loss(model(data), label) #=>0.3492440767136241
Flux.train!(loss, param, zip(data, label), opt)
loss(model(data), label) #=>0.3492440767136241
The first argument to Flux.train! needs to be a function which accepts the data, runs the model, and returns the loss. Its loop looks something like this:
for dtup in zip(data, label)
    gradient(() -> loss(dtup...), params)
    ...
end
But the function loss you provide doesn't call the model at all; it just compares the data point to the label directly.
There is more to fix here, though. What's being iterated over is tuples of single numbers, starting with zip(data, label) |> first, which I don't think is what you want. Maybe you wanted Flux.DataLoader to iterate over batches of images, as in the sketch below?
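A minimal sketch of one possible fix, reusing the names from the question (lossfn is a new name, since loss is already bound to Flux.mse, and the batch size of 10 is an arbitrary choice):
lossfn(x, y) = Flux.mse(model(x), y)             # the loss must run the model
batches = Flux.DataLoader((data, label), batchsize = 10)
Flux.train!(lossfn, params(model), batches, opt)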

Operation along third dimension of two 3-D arrays in R

I have two 3D arrays whose dimensions are [Longitude][Latitude][Time]; they hold temperature and precipitation observations in space and time.
I would like to obtain a 2D matrix [Longitude][Latitude] of the correlation (along the [Time] dimension) between the temperature and precipitation arrays at each specific Longitude-Latitude point.
The function apply() only works with one array at a time, and the only solution I came up with is a basic loop. With Ts and Ps being the temperature and precipitation 3D arrays respectively, this is what I wrote:
corr.matrix <- array(dim = dim(Ts)[c(1,2)])
for (i in seq(dim(Ts)[1])) {
  for (j in seq(dim(Ts)[2])) {
    corr.matrix[i,j] <- cor(Ts[i,j,], Ps[i,j,])
  }
}
It works; however, it is slow.
My question is: is there a faster (vectorised?) way to solve this simple problem in R?
It would help to have some sample working code to test, but could map2() from the purrr package help?
See https://purrr.tidyverse.org/reference/map2.html
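A rough sketch of that idea (untested; it assumes Ts and Ps are the arrays from the question):
library(purrr)
nlon <- dim(Ts)[1]
nlat <- dim(Ts)[2]
idx <- expand.grid(i = seq_len(nlon), j = seq_len(nlat))
# map2_dbl() visits every (i, j) pair; matrix() restores the [lon, lat] shape
corr.matrix <- matrix(
  map2_dbl(idx$i, idx$j, ~ cor(Ts[.x, .y, ], Ps[.x, .y, ])),
  nrow = nlon, ncol = nlat
)
Note that this still performs one cor() call per grid cell, so the gain over the double loop may be modest.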

Importing a one-dimensional dataset for Complete Spatial Randomness with spatstat

I have a set of one-dimensional data points (locations on a segment), and I would like to test for Complete Spatial randomness. I was planning to run Gest (nearest neighbor), Fest (empty space) and Kest (pairwise distances) functions on it.
I am not sure how I should import my data set though. I can use ppp by setting a second dimension to 0, e.g.:
myDistTEST <- data.frame(
  col1 = sample(x = 1:100, size = 50, replace = FALSE),
  col2 = paste('Event', 1:50, sep = ''),
  stringsAsFactors = FALSE)
myDistTEST <- myDistTEST[order(myDistTEST$col1),]
myPPPTest <- ppp(x = myDistTEST[,1], y = replicate(n = 50, expr = 0),
                 c(1,120), c(0,0))
But I am not sure it is the proper way to format my data. I have also tried to use lpp, but I am not sure how to set the linnet object. What would be the correct way to import my data?
Thank you for your kind attention.
It would be wrong to simply set y = 0 for all your points and then proceed as if you had a point pattern in two dimensions. Your suggestion of using lpp is good. Regarding how to define the linnet and lpp, try to look at my answer here.
I have considered making a small package to handle one-dimensional patterns more easily in spatstat, but so far I have only started the package with a single function that makes the definition of the appropriate lpp easier. If you feel adventurous, you can install it from the GitHub repo via the remotes package:
remotes::install_github("rubak/spatstat.1d")
The single function you can use is called lpp1. It basically just wraps up the few steps described in the linked answer.
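For reference, a hand-rolled sketch of those steps (untested; it assumes the sample data from the question and a segment running from 0 to 120):
library(spatstat)
# A one-segment network: two endpoint vertices joined by a single edge
ends <- ppp(x = c(0, 120), y = c(0, 0), window = owin(c(0, 120), c(-1, 1)))
L <- linnet(ends, edges = matrix(c(1, 2), ncol = 2))
# Place the 1D locations on the network with y = 0
X <- lpp(data.frame(x = myDistTEST$col1, y = 0), L)
Summary functions for linear networks (e.g. linearK) can then be applied to X.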

Re-classifying a random matrix

I am brand new to R and in some desperate need of help. I have created a random matrix and need to re-classify it. Each pixel is randomly generated from 0-255, and I need to be able to classify the 0-255 values into 8 classes. How would I do this? Any help would be greatly appreciated; I have placed my code below. I know I could use a raster, but I am unsure how to use them.
Thanks
par(mar=rep(0,4))
m=matrix(runif(100),10,10)
image(m,axes=FALSE,col=grey(seq(0,1,length=255)))
I didn't think your example adequately fit your description of the problem (since runif only ranges from 0-1 when the limits are not specified), so I modified it to fit the natural-language description:
m = matrix(runif(100, 0, 255), 10, 10)
m[] <- findInterval(m, seq(0, 256, length = 9))  # 9 breakpoints define 8 classes
image(m, axes = FALSE, col = grey(seq(0, 1, length = 255)))
The "[]" with no indices preserves the matrix structure of the m object. The findInterval function lets you do the same sort of binning as cut, but it returns a numeric vector rather than the factor that cut would give.

Graphing results of dbscan in R

Your comments, suggestions, or solutions would be greatly appreciated, thank you.
I'm using the fpc package in R to do a dbscan analysis of some very dense data (3 sets of 40,000 points in the range -3 to 6).
I've found some clusters, and I need to graph just the significant ones. The problem is that I have a single cluster (the first) with about 39,000 points in it. I need to graph all other clusters but this one.
dbscan() creates a special data type to store all of this cluster data in. It's not indexed like a data frame would be (but maybe there is a way to represent it as such?).
I can graph the dbscan type using a basic plot() call. But, like I said, this will graph the irrelevant 39,000 points.
tl;dr:
how do I graph only specific clusters of a dbscan data type?
If you look at the help page (?dbscan), it is organized like all others into sections labeled Description, Usage, Arguments, Details and Value. The Value section describes what the function dbscan returns. In this case it is simply a list (a standard R data type) with a few components.
The cluster component is simply an integer vector, whose length is equal to the number of rows in your data, that indicates which cluster each observation is a member of. So you can use this vector to subset your data to extract only those clusters you'd like, and then plot just those data points.
For example, if we use the first example from the help page:
set.seed(665544)
n <- 600
x <- cbind(runif(10, 0, 10) + rnorm(n, sd = 0.2),
           runif(10, 0, 10) + rnorm(n, sd = 0.2))
ds <- dbscan(x, 0.2)
we can then use the result, ds to plot only the points in clusters 1-3:
#Plot only clusters 1, 2 and 3
plot(x[ds$cluster %in% 1:3,])
Without knowing the specifics of dbscan, I can recommend that you look at the function smoothScatter. It is very useful for examining the main patterns in a scatterplot when you would otherwise have too many points to make sense of the data.
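For instance, with the x from the dbscan example above:
smoothScatter(x)  # draws a smoothed 2D density instead of individual points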
Probably the most sensible way of plotting DBSCAN results is using alpha shapes, with the radius set to the epsilon value. Alpha shapes are closely related to convex hulls, but they are not necessarily convex: the alpha radius controls the amount of non-convexity allowed.
This is quite closely related to the DBSCAN cluster model of density-connected objects, and as such will give you a useful interpretation of the set.
As I'm not using R, I don't know about the alpha-shape capabilities of R. There supposedly is a package called alphahull, from a quick check on Google.
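A hedged sketch with that package (untested; it assumes ds and x from the example above and that 0.2 was the DBSCAN epsilon):
library(alphahull)
pts <- unique(x[ds$cluster == 2, ])  # one cluster; ahull() requires distinct points
hull <- ahull(pts[, 1], pts[, 2], alpha = 0.2)
plot(pts)
plot(hull, add = TRUE, col = "red")  # overlay the alpha shape on the points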
