Optimsimplex: failure when defining a necessary argument - R

I was trying to build a regular simplex (the generalization of a triangle or tetrahedron to arbitrary dimensions) as the starting point for a set of optimization experiments. The optimsimplex package provides an easy and useful way to achieve this using the Spendley method:
library('optimsimplex') # Required package
Ultra <- optimsimplex(method ='spendley',
                      x0=c(Vhno3=3,Vh2o2=1,Msample=300,Tsonic=15))
The result Ultra is an optimsimplex-class object containing the spatial dimension (n) and the n coordinates of each of the n+1 vertices. It is possible to specify a dimension (length) for the simplex by using the len option:
len: The dimension of the simplex. If length is a value, that unique length is used in all directions. If length is a vector with n values, each length is used with the corresponding direction. Only used if method is set to ’axes’ or ’spendley’
But this results in an error that I cannot understand:
Ultra <- optimsimplex(method ='spendley',
                      x0=c(Vhno3=3,Vh2o2=1,Msample=300,Tsonic=15),
                      len=c(pVhno3=0.5,pVh2o2=0.25,pMsample=50,pTsonic=5))
Error: optimsimplex: The len vector is expected to be a row matrix, but current shape is 1 x 4
So a 1 x 4 matrix is not the row matrix that {optimsimplex} expects? Could this perhaps correspond to some kind of bug in the package? Thanks in advance.

The problem is solved by using the new version of the optimsimplex package, which, according to Sebastien Bihorel, will be available soon on CRAN but is currently available on Optimsimplex-Github.

Related

Compare two 2-dim feature vectors to find out their similarity

I'm trying to compare the similarity of two sets of feature vectors. The output shape of activation is (60000, 64) and the output shape of new_activation is (10000, 64). I'm looking for a way to find out how many of the vectors inside new_activation are similar to vectors in activation. How can I do that?
Thanks in advance
# put all the training data through the activation layer
activation = feature_activation_model.predict(train_img)
print(activation.shape)
#########
#put the new or old data to compare their feature vectors
new_activation = feature_activation_model.predict(test_img)
print(new_activation.shape)
I'm assuming this is Python code.
Well, you first need to define what you mean by a similar vector. Here you have a table of 60000 vectors of size 64 in activation (so the vectors are the rows?), and a table of 10000 vectors of size 64 in new_activation.
I don't know exactly what your mathematical problem is, so I can't really help, but similarity between vectors could be defined via the norm of their difference: let u and v be two vectors of the same size n; if ||u - v|| is close to machine precision, we can say u and v are pretty much the same vector.
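For illustration, here is a minimal NumPy sketch of that idea; the toy array sizes and the tolerance tol are my own choices, not something from the question:

import numpy as np

# toy stand-ins for the two tables (rows are the feature vectors);
# the question's real shapes are (60000, 64) and (10000, 64)
activation = np.random.rand(600, 64)
new_activation = np.random.rand(100, 64)

tol = 1e-6  # arbitrary threshold on the norm of the difference

# pairwise Euclidean distances between every new vector and every reference vector;
# for the full-size arrays you would process new_activation in chunks to save memory
dists = np.linalg.norm(new_activation[:, None, :] - activation[None, :, :], axis=2)

# a new vector counts as "similar" if it lies within tol of at least one reference vector
similar = (dists < tol).any(axis=1)
print("similar vectors:", similar.sum())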

How to get a value of a multi-dimensional array by an INCOMPLETE vector of indices

This question is very similar to
R - how to get a value of a multi-dimensional array by a vector of indices
I have:
dim_count <- 5
dims <- rep(3, dim_count)
pi <- array(1:3^5, dims)
I want to get an entire line (slice), but with the address of this line built automatically.
For example, I would like to get:
pi[1,,2,2,3]
## [1] 199 202 205
You could insert a sequence covering the whole dimension in the appropriate slot:
do.call("[",list(pi,1,1:dim(pi)[2],2,2,3))
By the way, defining a variable called pi is a little dangerous (I know this was inherited from the previous question) -- suppose you tried a few lines later to compute the circumference of a circle as pi*diameter ...

Calculating Cosine Similarity of two Vectors of Different Size

I have 2 questions:
I've made a vector from a document by counting how many times each word appeared in it. Is this the right way of making the vector, or do I have to do something else as well?
Using the above method I've created vectors for 16 documents, which are of different sizes. Now I want to apply cosine similarity to find out how similar the documents are. The problem I'm having is computing the dot product of two vectors because they are of different sizes. How would I do this?
Sounds reasonable, as long as it means you have a list/map/dict/hash of (word, count) pairs as your vector representation.
You should pretend that you have zero values for the words that do not occur in some vector, without storing these zeros anywhere. Then, you can use the following algorithm to compute the dot product of these vectors (pseudocode):
algorithm dot_product(a : WordVector, b : WordVector):
    dot = 0
    for (word, x) in a do
        y = lookup(word, b)
        dot += x * y
    return dot
The lookup part can be anything, but for speed, I'd use hashtables as the vector representation (e.g. Python's dict).
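As a concrete illustration of that idea, here is a small Python sketch using plain dicts as the (word, count) vectors; the function names and sample documents are mine, purely for demonstration:

import math

def dot_product(a, b):
    # iterate over the smaller dict; words missing from the other dict count as zero
    if len(a) > len(b):
        a, b = b, a
    return sum(x * b.get(word, 0) for word, x in a.items())

def cosine_similarity(a, b):
    norm_a = math.sqrt(dot_product(a, a))
    norm_b = math.sqrt(dot_product(b, b))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot_product(a, b) / (norm_a * norm_b)

doc1 = {"cat": 3, "sat": 1, "mat": 2}
doc2 = {"cat": 1, "dog": 4, "mat": 1}
print(cosine_similarity(doc1, doc2))  # works even though the two dicts have different keys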

How to store a polynomial?

Integers can be used to store individual numbers, but not mathematical expressions. For example, let's say I have the expression:
6x^2 + 5x + 3
How would I store the polynomial? I could create my own object, but I don't see how I could represent the polynomial through member data. I do not want to create a function that merely evaluates a passed-in argument, because I not only need to evaluate the polynomial but also need to manipulate the expression.
Is a vector my only option or is there a more apt solution?
A simple yet inefficient way would be to store it as a list of coefficients. For example, the polynomial in the question would look like this:
[6, 5, 3]
If a term is missing, place a zero in its place. For instance, the polynomial 2x^3 - 4x + 7 would be represented like this:
[2, 0, -4, 7]
The degree of the polynomial is given by the length of the list minus one. This representation has one serious disadvantage: for sparse polynomials, the list will contain a lot of zeros.
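As a quick illustration (my own sketch, not part of the answer), evaluating such a coefficient list with Horner's rule in Python:

def eval_dense(coeffs, x):
    # coeffs run from the highest power down to the constant term,
    # e.g. [6, 5, 3] represents 6x^2 + 5x + 3
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

print(eval_dense([6, 5, 3], 2))      # 6*4 + 5*2 + 3 = 37
print(eval_dense([2, 0, -4, 7], 3))  # 2*27 - 4*3 + 7 = 49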
A more reasonable representation of the term list of a sparse polynomial is as a list of the nonzero terms, where each term is a list containing the order of the term and the coefficient for that order; the degree of the polynomial is given by the order of the first term. For example, the polynomial x^100+2x^2+1 would be represented by this list:
[[100, 1], [2, 2], [0, 1]]
As an example of how useful this representation is, the book SICP builds a simple but very effective symbolic algebra system using the second representation for polynomials described above.
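For comparison, a corresponding sketch (again my own illustration) that evaluates the sparse term-list representation:

def eval_sparse(terms, x):
    # terms is a list of [order, coefficient] pairs,
    # e.g. x^100 + 2x^2 + 1 is [[100, 1], [2, 2], [0, 1]]
    return sum(coeff * x ** order for order, coeff in terms)

print(eval_sparse([[100, 1], [2, 2], [0, 1]], 1))  # 1 + 2 + 1 = 4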
A list is not the only option.
You can use a map (dictionary) mapping the exponent to the corresponding coefficient.
Using a map, your example would be
{2: 6, 1: 5, 0: 3}
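One reason this map representation is convenient is that manipulation stays simple; here is a small Python sketch (my own example) of adding two polynomials stored as dicts:

from collections import defaultdict

def add_polynomials(p, q):
    # polynomials are dicts mapping exponent -> coefficient
    result = defaultdict(int)
    for exponent, coeff in list(p.items()) + list(q.items()):
        result[exponent] += coeff
    # drop terms whose coefficients cancelled out
    return {e: c for e, c in result.items() if c != 0}

p = {2: 6, 1: 5, 0: 3}   # 6x^2 + 5x + 3
q = {3: 1, 1: -5}        # x^3 - 5x
print(add_polynomials(p, q))  # {2: 6, 0: 3, 3: 1}, i.e. x^3 + 6x^2 + 3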
A list of (coefficient, exponent) pairs is quite standard. If you know your polynomial is dense, that is, all the exponent positions are small integers in the range 0 to some small maximum exponent, you can use an array, as I see Óscar Lopez just posted. :)
You can represent expressions as Expression Trees. See for example .NET Expression Trees.
This allows for much more complex expressions than simple polynomials and those expressions can also use multiple variables.
In .NET you can manipulate the expression tree as a tree AND you can evaluate it as a function.
Expression<Func<double,double>> polynomial = x => (x * x + 2 * x - 1);
double result = polynomial.Compile()(23.0);
An object-oriented approach would say that a Polynomial is a collection of Monomials, and a Monomial encapsulates a coefficient and exponent together.
This approach works well even when you have a polynomial like this:
y(x) = x^1000 + 1
An approach that ties the size of the data structure to the order of the polynomial would be terribly wasteful for this pathological case.
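A minimal Python sketch of that object-oriented idea (the class and method names are mine, purely illustrative):

class Monomial:
    def __init__(self, coefficient, exponent):
        self.coefficient = coefficient
        self.exponent = exponent

    def evaluate(self, x):
        return self.coefficient * x ** self.exponent

class Polynomial:
    def __init__(self, monomials):
        # only the terms that actually exist are stored,
        # so x^1000 + 1 costs two Monomials rather than 1001 slots
        self.monomials = list(monomials)

    def evaluate(self, x):
        return sum(m.evaluate(x) for m in self.monomials)

y = Polynomial([Monomial(1, 1000), Monomial(1, 0)])  # x^1000 + 1
print(y.evaluate(1))  # 2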
You need to store two things:
The degree of your polynomial (e.g. "3")
A list containing each coefficient (e.g. "{3, 0, 2}")
In standard C++, "std::vector<>" and "std::list<>" can do both.
A vector/array is the obvious choice. Depending on the type of expressions, you may consider some sort of sparse vector type (custom made, e.g. based on a dictionary, or even a linked list if your expressions have only 2-3 non-zero coefficients, such as 5x^100 + x).
In either case, exposing it through a custom class/interface would be beneficial, as you can replace the implementation later. You would likely want to provide the standard operations (+, -, *, equals) if you plan to write a lot of expression-manipulation code.
Just store the coefficients in an array or vector. For example, in C++ if you are only using integer coefficients, you could use std::vector<int>, or for real numbers, std::vector<double>. Then you just push the coefficients in order and index them by the exponent of the corresponding term.
For example (again in C++), to store 5*x^3 + 9*x - 2 you might do:
std::vector<int> poly;
poly.push_back(-2); // x^0, accessed with poly[0]
poly.push_back(9); // x^1, accessed with poly[1]
poly.push_back(0); // x^2, etc
poly.push_back(5); // x^3, etc
If you have large, sparse polynomials, then maybe you'd want to use a map instead of a vector. If you have a fixed size, then you'd perhaps use a fixed-length array instead of a vector.
I've used C++ for examples, but this same scheme can be used in any language.
You can also transform it into reverse Polish notation:
6x^2 + 5x + 3 -> x 2 ^ 6 * x 5 * + 3 +
Where x and numbers are "pushed" onto a stack and operations (^,*,+) take the two top-most values from the stack and replace them with the result of the operation. In the end you get the resultant value on the stack.
In this form it's easy to calculate arbitrarily complex expressions.
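For illustration, a small Python sketch (mine, not the answerer's) of evaluating such an RPN expression with a stack:

def eval_rpn(expression, variables):
    # expression is a space-separated RPN string, e.g. "x 2 ^ 6 * x 5 * + 3 +"
    stack = []
    for token in expression.split():
        if token == "+":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif token == "*":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif token == "^":
            b, a = stack.pop(), stack.pop()
            stack.append(a ** b)
        elif token in variables:
            stack.append(variables[token])
        else:
            stack.append(float(token))
    return stack.pop()  # the result is the last value left on the stack

print(eval_rpn("x 2 ^ 6 * x 5 * + 3 +", {"x": 2}))  # 6*4 + 5*2 + 3 = 37.0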
This representation is also close to the tree representation of expressions, where non-leaf nodes represent operations and functions, and leaf nodes hold constants and variables.
What's good about trees is that you can easily evaluate expressions on them and also do things like symbolic differentiation. Both have a recursive nature.

Make a matrix full-ranked?

How can I turn a regular matrix into a full-rank matrix in R? Is there an available method for that?
I have a matrix that may have linearly dependent columns and I need to
pass it to a function that requires its argument to be a matrix with
full rank. Since linearly dependent columns are not of interest
anyway, I am looking for a function that removes such columns until
the matrix is full rank. There may be several solutions of course, but
any one of them should be fine.
Right now I am just constructing the matrix column by column and only
adding a column if the resulting matrix is still full rank, but it
feels like there should be a better way to do this.
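For illustration only (the question is about R, but the logic carries over), the column-by-column idea can be written down directly; here is a minimal Python/NumPy sketch of it:

import numpy as np

def full_rank_columns(A):
    # greedily keep a column only if it increases the rank of the columns kept so far
    kept = []
    current_rank = 0
    for j in range(A.shape[1]):
        if np.linalg.matrix_rank(A[:, kept + [j]]) > current_rank:
            kept.append(j)
            current_rank += 1
    return A[:, kept], kept

A = np.array([[1., 2., 3.],
              [2., 4., 1.],
              [3., 6., 2.]])   # the second column is twice the first, so it gets dropped
B, kept = full_rank_columns(A)
print(kept)                                    # [0, 2]
print(np.linalg.matrix_rank(B) == B.shape[1])  # True: B has full column rank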
Another approach is to minimize ||y - Ax||^2 + c ||x||^2, by tacking an identity matrix onto A and zeros onto y. The parameter c (a.k.a. λ) trades off fitting y - Ax against keeping ||x|| small. Then run a second fit with the r largest components of x, r = rank(A) (or any number you please).
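A rough NumPy sketch of that ridge idea (illustrative only; the value of c is arbitrary, and the follow-up fit on the r largest components of x is omitted):

import numpy as np

def ridge_solution(A, y, c):
    # minimize ||y - Ax||^2 + c ||x||^2 by appending sqrt(c) * I to A and zeros to y
    n = A.shape[1]
    A_aug = np.vstack([A, np.sqrt(c) * np.eye(n)])
    y_aug = np.concatenate([y, np.zeros(n)])
    x, *_ = np.linalg.lstsq(A_aug, y_aug, rcond=None)
    return x

A = np.array([[1., 2., 3.],
              [2., 4., 1.],
              [3., 6., 2.]])   # rank-deficient: the second column is twice the first
y = np.array([1., 2., 3.])
print(ridge_solution(A, y, c=1e-3))  # a small-norm solution despite the rank deficiency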
