If we have a multivariate polynomial in Sage, for instance
f = 3*x^2*y^2 + x*y + 3
how can I display the full list of coefficients, including the zero ones from the missing terms between the maximum-degree term and the constant?
P.<x,y> = PolynomialRing(ZZ, 2, order='lex')
f=3*x^2*y^2+x*y+3
f.coefficients()
gives me the list
[3, 1, 3]
but I'd like the "full" list to put into a matrix. In the above example it should be
[3, 0, 0, 1, 0, 0, 0, 0, 3]
corresponding to terms:
x^2*y^2, x^2*y, x*y^2, x*y, x^2, y^2, x, y, constant
Am I missing something?
Your desired output isn't quite well defined, because the monomials you listed are not in lexicographic order (the order you chose in the first line of your code). Anyway, with a double loop you can arrange the coefficients in any specific order you want. Here is a natural way to do it:
coeffs = []
for i in range(f.degree(x), -1, -1):        # exponents of x, from highest to 0
    for j in range(f.degree(y), -1, -1):    # exponents of y, from highest to 0
        coeffs.append(f.coefficient({x: i, y: j}))
Now coeffs is [3, 0, 0, 0, 1, 0, 0, 0, 3], corresponding to
x^2*y^2, x^2*y, x^2, x*y^2, x*y, x, y^2, y, constant
The built-in .coefficients() method is only useful in combination with .monomials(), which returns the matching list of monomials for those coefficients (here f.monomials() gives [x^2*y^2, x*y, 1], pairing with [3, 1, 3]).
Related
I have a data frame df with dimensions 10,000 x 40,000 (this matrix has a lot of 0's):
library(dplyr)   # data_frame() comes from dplyr
value1 <- c(1, 0, 3, 0, 0, 2)
value2 <- c(0.8, 0.1, 9, 0, 0, 5)
value3 <- c(8, 3, 0, 0, 0, 0)
df <- data_frame(value1, value2, value3)
I want to calculate the covariance matrix of df.
I have tried using bigcor(), and I have also tried calculating the covariance matrix of a sparse matrix (as in Running cor() (or any variant) over a sparse matrix in R).
However, the R session aborts.
Any help?
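One thing worth checking first: even if the computation succeeds, a 40,000 x 40,000 covariance matrix is dense and needs roughly 12-13 GB in double precision, which can by itself crash an R session. That said, here is a minimal sketch of computing the covariance from a sparse matrix via crossprod() in the Matrix package, without densifying the data first; sparse_cov() is a hypothetical helper written for this sketch, not an existing function:

library(Matrix)

# covariance via cov(X) = (X'X - n * mu mu') / (n - 1), with X'X on the sparse form
sparse_cov <- function(X) {
  n <- nrow(X)
  mu <- colMeans(X)                        # column means, returned as a dense vector
  crossX <- as.matrix(crossprod(X))        # t(X) %*% X, computed on the sparse matrix
  (crossX - n * tcrossprod(mu)) / (n - 1)
}

# check against base cov() on the toy columns above
X <- Matrix(cbind(value1, value2, value3), sparse = TRUE)
all.equal(sparse_cov(X), cov(cbind(value1, value2, value3)), check.attributes = FALSE)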
I am trying to build a model of points in space, where each point has constraints with other points (meaning that if points a and b have a constraint of 5, then the distance between them must be exactly 5).
The figure (not shown here) is a basic model, where green marks the nodes and red the constraints.
I need to find the x1,y1,x2,y2,x3,y3.
The model receives a matrix of constraints.
In the case of the model above, the matrix will be:
[[ 0,  4, -1],
 [ 4,  0,  5],
 [-1,  5,  0]]
For a small model like this, the task is easy.
But when more constraints are added, as in the larger model (figure not shown),
the matrix becomes:
[[ 0,  4, -1,  4],
 [ 4,  0,  5, -1],
 [-1,  5,  0,  5],
 [ 4, -1,  5,  0]]
Does anyone have an idea how to create this model when the input is a matrix of constraints?
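For what it's worth, here is a minimal sketch of one possible approach (my own, under the assumption that -1 means "no constraint" between a pair): treat it as a least-squares problem, placing the nodes so that each required distance is matched as closely as possible, and checking that the residual is near zero:

# place nodes by minimising the squared gap between required and actual distances
D <- rbind(c( 0,  4, -1,  4),
           c( 4,  0,  5, -1),
           c(-1,  5,  0,  5),
           c( 4, -1,  5,  0))

stress <- function(coords, D) {
  P <- matrix(coords, ncol = 2)           # one (x, y) row per node
  n <- nrow(D)
  total <- 0
  for (i in 1:(n - 1)) {
    for (j in (i + 1):n) {
      if (D[i, j] >= 0) {                 # skip pairs without a constraint (-1)
        d <- sqrt(sum((P[i, ] - P[j, ])^2))
        total <- total + (d - D[i, j])^2
      }
    }
  }
  total
}

set.seed(1)
fit <- optim(runif(2 * nrow(D), 0, 5), stress, D = D, method = "BFGS")
matrix(fit$par, ncol = 2)   # estimated coordinates; fit$value near 0 means all constraints are met

Note that the coordinates are only determined up to translation, rotation and reflection, so different starting values can give different but equally valid layouts.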
Is there a way of extracting the functional form of the likelihood function that gets formed by the msm function in R?
How can I extract the likelihood function that gets formed in the example below? I want to try and implement my own version of the quasi-Newton maximisation algorithm to improve my understanding.
library(msm)
# look at transition counts
statetable.msm(state, PTNUM, data = cav)
# define transition intensity matrix
# 1's mean a transition can occur
# 0's mean a transition should not occur
# any number can be placed on the diagonal as R overwrites the diagonals
# prior to maximising
q <- rbind(
  c(0, 1, 0, 1),
  c(1, 0, 1, 1),
  c(0, 1, 0, 1),
  c(0, 0, 0, 0)
)
# fit msm to the data
# the fnscale rescales the likelihood to prevent overflow
msm.fit <- msm(state ~ years, PTNUM, data = cav, qmatrix = q, control=list(fnscale=4000))
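For comparison, here is a rough sketch of the panel-data likelihood I believe this call maximises. It is my own reconstruction, not code extracted from msm, and it ignores censoring, covariates and exact death times. It reuses q and cav from the code above and assumes rows are ordered by time within each subject; each pair of consecutive observations contributes P(t)[from, to], where P(t) = expm(Q * t):

library(expm)   # for the matrix exponential

manual_loglik <- function(logq, qmask, data) {
  Q <- qmask * 0
  Q[qmask == 1] <- exp(logq)     # intensities must be positive, so work on the log scale
  diag(Q) <- -rowSums(Q)         # rows of an intensity matrix sum to zero
  ll <- 0
  for (id in unique(data$PTNUM)) {
    d <- data[data$PTNUM == id, ]
    for (k in seq_len(nrow(d))[-1]) {
      P <- expm(Q * (d$years[k] - d$years[k - 1]))
      ll <- ll + log(P[d$state[k - 1], d$state[k]])
    }
  }
  ll
}

# evaluate at arbitrary starting intensities, or pass to optim() for your own
# quasi-Newton experiments
manual_loglik(rep(log(0.1), sum(q == 1)), qmask = q, data = cav)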
The model is:
model <- glm(DW ~ P + DV_1, family = "binomial")
The variables are:
DW <- c(1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1)
P <- c(18.584898, 8.177430, -7.392020, -13.123626, 11.742363, 35.836419, 8.177430, 8.177430, 7.209096, 10.398933, -23.382043, -8.177430, 7.392020, 17.607980, -37.631207, -8.177430, 12.202439, -29.602930, -8.177430, 14.709837, 8.194932, 8.177430, -5.222738, 1.185302, 12.049662, 6.193046)
DV_1 <- c(45.49215, 55.40000, 51.63815, 36.12306, 34.78324, 41.17867, 59.14783, 62.45898, 55.04072, 53.76998, 52.31764, 44.71056, 42.23566, 50.08676, 61.34397, 49.59538, 38.21099, 51.05214, 44.69676, 40.83045, 46.09846, 53.45508, 54.73643, 50.26476, 48.75601, 53.68885)
If I try to obtain confidence intervals for each parameter with confint, I get this warning message:
glm.fit: fitted probabilities numerically 0 or 1 occurred
For this specific model:
Is it better to use confint or other functions?
How do I fix that warning?
Can I get reliable confidence intervals, and how?
Thanks in advance
You can always calculate confidence intervals like this for a glm, without having to rely on any extra packages:
exp(confint.default(model))
You could also use the Bayesian approach recommended by Sotos; it depends on what you really want to do. If this is a homework-style question asking you simply to fit a glm and report confidence intervals, then the command above will help you out.
Also, if you fit the glm() with only one predictor and then use confint, you will not get any warnings.
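For reference, a short sketch contrasting the two interval types on this model: confint() profiles the likelihood (the warning about fitted probabilities of 0 or 1 typically comes from the refits done during that profiling), while confint.default() uses the Wald approximation based on the estimated standard errors:

model <- glm(DW ~ P + DV_1, family = "binomial")

confint(model)               # profile-likelihood intervals
confint.default(model)       # Wald intervals
exp(confint.default(model))  # the same Wald intervals on the odds-ratio scale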
I have the following nonlinear constrained optimization problem, which I am solving in R using solnp:
max_w F(w)
s.t.
w_i >= 0 for all i
sum(w) = 1
However, I would like to add an extra constraint, but I'm not sure it is even possible. I would like all the w's bigger than 0 to have equal weights. Something like:
max_w F(w)
s.t.
w_i >= 0 for all i
sum(w) = 1
w_i = w_j for all i, j where w_i, w_j > 0
Does anyone know if it is possible, and if so, how to do it?
I am not sure this is necessarily a hard optimization problem, given that your search space is completely determined. Essentially, for a finite number of components w_i, you have a finite number of candidate points in w-space to search over. These are:
c(1, 0, 0, ..., 0)
c(0, 1, 0, ..., 0)
...
c(0, 0, ..., 1)
c(1/2, 1/2, 0, ..., 0)
c(1/2, 0, 1/2, ..., 0)
...
c(1/3, 1/3, 1/3, 0, ..., 0)
...
c(1/n, 1/n, ..., 1/n)
and so on; you get the idea.
This means you can simply evaluate your function at each of these points and pick the combination that maximizes F.
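A minimal sketch of that enumeration in R (the objective F below is just a placeholder, since the real F(w) was not shown; note there are 2^n - 1 candidate supports, so this only scales to modest n):

F <- function(w) -sum((w - c(0.4, 0.3, 0.2, 0.1))^2)   # hypothetical objective

n <- 4
best <- list(value = -Inf, w = NULL)
for (k in 1:n) {
  for (support in combn(n, k, simplify = FALSE)) {
    w <- numeric(n)
    w[support] <- 1 / k                   # the k active weights are equal and sum to 1
    if (F(w) > best$value) best <- list(value = F(w), w = w)
  }
}
best   # the equal-weight support with the largest F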
Does that sound about right or have I missed something critical?