Coming from TensorFlow and PyTorch, does Flux.jl contain a tensor-like structure? If not, what is the common way to structure your data?
From the Flux.jl docs:
The starting point for all of our models is the Array (sometimes referred to as a Tensor in other frameworks). This is really just a list of numbers, which might be arranged into a shape like a square.
So given this, the way to represent data is just via traditional matrices (which are just arrays). You can find out more about Julia's first-class array support here: https://docs.julialang.org/en/v1/manual/arrays/
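If it helps to map this onto familiar territory, Julia's Array plays the same role as a NumPy ndarray or a framework tensor. Here is a hedged sketch in Python/NumPy of the same idea, stacking a batch of feature vectors into a 2-D array (the layout is illustrative, not a statement of Flux's batching convention):

```python
import numpy as np

# Three instances, each with four features, stacked into a 2-D array.
# This is the "list of numbers arranged into a shape" idea from the Flux docs.
samples = [np.array([0.1, 0.2, 0.3, 0.4]),
           np.array([0.5, 0.6, 0.7, 0.8]),
           np.array([0.9, 1.0, 1.1, 1.2])]
batch = np.stack(samples)   # shape (3, 4): rows are instances, columns are features
print(batch.shape)          # (3, 4)
```

In Julia the equivalent object is simply a `Matrix{Float64}`; there is no separate tensor type to learn.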
Related
Can anyone suggest a real example of dimensionality reduction on a hyperspectral image, using a model like PCA or ICA, in R or Python?
Dimensionality reduction is critical when dealing with hyperspectral data, and it is quite easy to implement in Python.
Import the Spectral Python (SPy) library and use its principal_components function on your data to get the PCA result.
For an example, you should check out this
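If you'd rather not depend on the spectral package, the same reduction can be sketched with plain NumPy: flatten the (rows, cols, bands) cube into a (pixels, bands) matrix, run PCA via the SVD, and reshape back. The cube below is random stand-in data, used only to show the shapes involved:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in hyperspectral cube: 10x10 pixels, 50 spectral bands.
rows, cols, bands = 10, 10, 50
cube = rng.random((rows, cols, bands))

# PCA works on a 2-D (samples, features) matrix, so treat each
# pixel as a sample and each spectral band as a feature.
pixels = cube.reshape(-1, bands)              # (100, 50)
centered = pixels - pixels.mean(axis=0)

# The principal components are the right singular vectors
# of the centered data matrix.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
k = 3                                          # keep the top 3 components
reduced = centered @ vt[:k].T                  # (100, 3)
reduced_cube = reduced.reshape(rows, cols, k)
print(reduced_cube.shape)                      # (10, 10, 3)
```

With a real image you would replace the random cube with your loaded band data; everything after the reshape is unchanged.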
I would like to define a slightly more general version of a complex number in R. This should be a vector that has more than one component, accessible in a similar manner to using Re() and Im() for complex numbers. Is there a way to do this using S3/S4 classes?
I have read through the OO field guide among other resources, but most solutions seem focused on using lists as the fundamental building blocks. However, I need vectors for use in data.frames and matrices. I was hoping to use complex numbers as a template, but they seem to be implemented largely in C. At this point, I don't even know where to start.
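The question is about R's S3/S4 system, but for intuition, here is the same idea sketched in Python: a flat, genuinely vector-like array whose elements each carry several named components, with accessor functions playing the role of Re() and Im(). The component names are made up for illustration:

```python
import numpy as np

# Each element has three named components (the names "a", "b", "c"
# are hypothetical, chosen only for this sketch).
triple = np.dtype([("a", "f8"), ("b", "f8"), ("c", "f8")])
v = np.array([(1.0, 2.0, 3.0), (4.0, 5.0, 6.0)], dtype=triple)

# Component accessor, analogous to Re()/Im() on complex vectors.
def comp_a(x):
    return x["a"]

print(comp_a(v))   # [1. 4.]
print(v.shape)     # (2,) -- a true vector, one slot per element
```

The key property being asked for in R is the same one shown here: the object indexes like a vector of length 2, not like a 2x3 matrix or a list.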
I'm generating a self-organizing map in R using the kohonen package. However, when looking at the documentation, I cannot find a clear understanding of what the codes property of the som object represents.
The documentation only states:
codes: a matrix of code vectors.
What is a matrix of code vectors in this context?
If it works like other SOM packages do, I believe the codes value you mention refers to codebook vectors. Here's a good resource that explains how those work:
The codebook vectors themselves represent prototypes (points) within the domain, whereas the topological structure imposes an ordering between the vectors during the training process.
From http://www.cleveralgorithms.com/nature-inspired/neural/som.html
I would recommend reading the original paper that accompanied the kohonen package, which you can find here: https://www.jstatsoft.org/article/view/v021i05/v21i05.pdf
It provides quite a bit more detail than the R docs.
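To make "a matrix of code vectors" concrete: each map unit owns one codebook vector with the same dimensionality as the input data, and training pulls the winning unit's vector toward each sample. A toy sketch in Python (a deliberately simplified SOM update with no neighbourhood function, not the kohonen package's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(1)

# 4 map units, 3-dimensional input data: the codebook is a 4x3 matrix,
# one "code vector" (prototype) per unit.
n_units, n_features = 4, 3
codes = rng.random((n_units, n_features))
data = rng.random((20, n_features))

lr = 0.5
for x in data:
    # Winner = the unit whose code vector is closest to the sample.
    winner = np.argmin(np.linalg.norm(codes - x, axis=1))
    # Nudge the winner's code vector toward the sample.
    # (A real SOM also updates the winner's topological neighbours.)
    codes[winner] += lr * (x - codes[winner])

# After training, `codes` is the "matrix of code vectors": each row
# is a prototype summarizing the part of the data it won.
print(codes.shape)   # (4, 3)
```

The `codes` property of a trained kohonen som object is exactly this kind of units-by-features matrix.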
Is anybody familiar with a way I could implement a matrix with values from a field (not the real or complex numbers, but, say, Z mod p), so that I could perform all of MATLAB's matrix operations on it (with values in the chosen field)?
Ariel
I suspect that you will want to use MATLAB's object-oriented capabilities, so that you can define both the fundamental representation of the elements of your field and the basic operations on them. There's a reasonably good, if elementary, example of implementing polynomials using MATLAB's OO features in the product documentation. That might be a good place for you to start.
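The question asks about MATLAB, but to illustrate the arithmetic itself: working over Z mod p just means reducing every result modulo p, with division handled via modular inverses. A hedged sketch in Python/NumPy over Z mod 5:

```python
import numpy as np

p = 5  # work over Z mod 5 (p must be prime for this to be a field)

a = np.array([[1, 2], [3, 4]])
b = np.array([[2, 0], [1, 3]])

# Addition and multiplication, reduced mod p after each operation.
add = (a + b) % p     # [[3, 2], [4, 2]]
mul = (a @ b) % p     # [[4, 1], [0, 2]]

# Division by a nonzero element means multiplying by its inverse mod p.
inv3 = pow(3, -1, p)  # multiplicative inverse of 3 mod 5 -> 2

print(add)
print(mul)
print(inv3)
```

An OO wrapper (in MATLAB or any language) would overload the arithmetic operators to apply these reductions automatically, which is exactly what the class-based approach above enables.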
I am trying to build a data processing program. Currently I use a double matrix to represent the data table: each row is an instance, and each column represents a feature. I also have an extra vector as the target value for each instance; it is of double type for regression and of integer type for classification.
I want to make it more general. I am wondering what kind of structure R uses to store a dataset, i.e. the internal implementation in R.
Maybe if you inspect the rpy2 package, you can learn something about how data structures are represented (and can be accessed).
The internal data structure is the `data.frame`; a detailed introduction to data frames can be found here:
http://cran.r-project.org/doc/manuals/R-intro.html#Data-frames
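Underneath, an R data.frame is essentially a list of equal-length columns (vectors), where each column may have a different type. A stdlib-Python sketch of that layout, using the asker's features-plus-target setup (column names are hypothetical):

```python
# An R data.frame is, internally, a list of equal-length columns
# that may differ in type. A dict of lists mirrors that layout.
frame = {
    "feature1": [0.5, 1.2, 3.3],   # double column
    "feature2": [2.0, 0.1, 4.4],   # double column
    "target":   [0, 1, 0],         # integer column (classification labels)
}

# All columns must have the same length: the number of instances.
n_rows = len(frame["target"])
assert all(len(col) == n_rows for col in frame.values())

# One instance = one "row" read across the columns.
row0 = {name: col[0] for name, col in frame.items()}
print(row0)   # {'feature1': 0.5, 'feature2': 2.0, 'target': 0}
```

This column-wise design is what lets a data.frame mix double features with an integer target in one table, which is exactly the generalization the asker is after.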