A combination of the commands partial_fraction(x) and coefficient(x,n) - Sage

I am trying to do some iterative calculations where, at each step, Sage constructs a fraction and lists the coefficients of its partial fraction decomposition. I realized that, when working symbolically, Sage wants to keep everything as integral as possible, so it simplifies fractions like 1/(x-3/2) to 2/(2x-3). Then when I ask for
f.coefficient(x-3/2,-1)
it returns 0, while I expect it to return 1.
I have tried to solve things numerically, but there are two problems:
- The errors get really big after each iteration.
- It takes much, much longer to calculate.
Any suggestions for getting Sage to handle this are greatly appreciated.
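For what it's worth, the coefficient being extracted here is the residue of f at x = 3/2. When that pole is simple, it can be recovered independently of how Sage normalizes the fraction:

a_{-1} = \lim_{x \to 3/2} \left( x - \tfrac{3}{2} \right) f(x)

which sidesteps the 2/(2x-3) rewriting entirely; in Sage this would be limit((x - 3/2)*f, x=3/2), assuming f really does have a simple pole there.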

Related

Why do we use log probability in deep learning?

I got curious while reading the paper 'Sequence to Sequence Learning with Neural Networks'.
In fact, not only this paper but many other papers use log probabilities. Is there a reason for that?
Two reasons -
Theoretical - The probability of two independent events A and B occurring together is given by P(A)·P(B). Taking logs maps this product to a sum, i.e. log(P(A)) + log(P(B)), which makes it easier to treat the neuron-firing 'events' as a linear function.
Practical - Probability values lie in [0, 1], so multiplying two or more such small numbers can easily underflow in floating-point arithmetic (e.g. consider multiplying 0.0001 * 0.00001). A practical solution is to work with logs to avoid the underflow.
For any given problem we need to optimise the likelihood of the parameters, but optimising the product requires all the data at once and a huge amount of computation. A sum is much easier to optimise, since the derivative of a sum is the sum of the derivatives; taking the log converts the product into a sum and makes the computation faster.
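A minimal R sketch of the underflow point above (the probability values are illustrative):

p <- rep(1e-50, 8)   # eight small independent probabilities
prod(p)              # 0 -- the true value 1e-400 underflows below ~5e-324
sum(log(p))          # about -921.03, the log of the product, computed safely

Since log is monotone, optimising the sum of logs is equivalent to optimising the product itself.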

Handling extremely small numbers

I need to find a way to handle extremely small numbers in R, particularly in order to take the log of extremely small numbers. According to the R-manual, “on a typical R platform the smallest positive double is about 5e-324.” Well, I need to deal with numbers even smaller (at least as small as 10^-350). If R is incapable of doing this, I was wondering if there is a way I can use a program that can do this (such as Matlab or Mathematica) from R.
Specifically, I am computing a matrix of probabilities, and some of these probabilities are so small that R does not distinguish them from 0. The reason I know this is because each probability is the product of two other probabilities; so I’ll have p(x)=10^-300, p(y)=10^-50, and then p(x)*p(y)=0. I’d like to be able to do these computations, take the log of the resultant very small number (-805.905 for my example, according to Mathematica), and then continue working with the log values in R.
So to be more detailed: I have a matrix of values for p(x) and a matrix of values for p(y), both computed using dnorm, and I'm computing their product. In many cases R is capable of evaluating p(x) and p(y), but the product p(x)*p(y) is too small. In a few cases, though, even the p(x) or p(y) value itself is too small and just comes out as 0 in R.
I've seen that there is material out there on calling R from Mathematica, but not much pertaining to calling Mathematica from R. I'd honestly prefer the latter to the former here. So if anyone knows how to do this (employing Mathematica, Matlab, or something else from R), or has another solution to this issue, I'd greatly appreciate it.
Note that I realize there are a few other threads on this topic, discussing such things as using the Brobdingnag package to deal with small numbers, but these do not appear applicable here.
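Since both factors come from dnorm, one way to avoid the underflow entirely is to work on the log scale from the start: dnorm has a log argument, and the log of a product is the sum of the log densities. A minimal sketch (the input points are illustrative):

x <- 37; y <- 16
dnorm(x)                  # about 2e-298, still representable
dnorm(y)                  # about 1e-56
dnorm(x) * dnorm(y)       # 0 -- the product (~1e-354) underflows

# Compute the log-density directly, never forming the tiny probability:
log_p <- dnorm(x, log = TRUE) + dnorm(y, log = TRUE)
log_p                     # about -814.3, safe to keep working with in R

This applies elementwise to whole matrices of dnorm values, so the matrix of log-probabilities can be built without ever leaving double precision.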

R function to solve large dense linear systems of equations?

Sorry, maybe I am blind, but I couldn't find anything specific for a rather common problem:
I want to implement
solve(A,b)
with A being a large square matrix, in the sense that the command above uses all my memory and throws an error (b is a vector of the corresponding length). The matrix I have is not sparse, in the sense that there are no large blocks of zeros.
There must be some function out there which implements a stepwise iterative scheme such that a solution can be found even with limited memory available.
I found several posts on sparse matrices and, of course, the Matrix package, but could not identify a function that does what I need. I have also seen this post, but biglm produces a complete linear-model fit. All I need is a simple solve. I will have to repeat that step several times, so it would be great to keep it as slim as possible.
I already worry about the "duplication of an old issue" and "look here" comments, but I would be really grateful for some help.
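No answer is recorded here, but one low-memory option for a dense system is a classical stationary iteration such as Gauss-Seidel, which touches one row of A at a time and needs no factorization copies. A minimal sketch, not a drop-in replacement for solve (it converges only for suitable matrices, e.g. strictly diagonally dominant ones; names and tolerances are illustrative):

# Gauss-Seidel: iteratively solve A %*% x = b one row at a time.
gauss_seidel <- function(A, b, tol = 1e-10, max_iter = 1000L) {
  n <- length(b)
  x <- numeric(n)                         # start from the zero vector
  for (iter in seq_len(max_iter)) {
    x_old <- x
    for (i in seq_len(n)) {
      # Update x[i] using the most recent values of the other components.
      x[i] <- (b[i] - sum(A[i, -i] * x[-i])) / A[i, i]
    }
    if (max(abs(x - x_old)) < tol) break  # stop when the update stalls
  }
  x
}

# Illustrative use on a small, diagonally dominant system:
A <- matrix(c(4, 1, 0,
              1, 5, 2,
              0, 2, 6), nrow = 3, byrow = TRUE)
b <- c(1, 2, 3)
gauss_seidel(A, b)                        # close to solve(A, b)

Because each sweep only needs one row of A at a time, the rows could even be streamed from disk rather than held in memory all at once.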

How to improve the precision in such a situation in R?

I want to calculate the value of (1e-6)^84, but in R/R64 the result is 0, which causes problems when applying the log10 function to it.
Is there any way to solve this problem?
It depends on what problem you are actually trying to solve. Do you care about the value of log(teenytinynumber)? If not, replace the zero values with NA and keep going. If you do, figure out whether there is a better way than following a giant exponent with a log function. Which is to say, simplify your algorithm before crunching numbers.
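A concrete instance of that simplification: log10((1e-6)^84) is just 84 * log10(1e-6), so the underflowing power never needs to be formed at all:

(1e-6)^84          # 0 -- the true value 1e-504 underflows in double precision
log10((1e-6)^84)   # -Inf, because log10 only ever sees the underflowed 0
84 * log10(1e-6)   # -504 -- rearrange so the log is taken first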

Minimisation using the nlminb R-function

I would like to find the 4-dimensional vector that minimises some function f which depends on 4 variables. The first three variables take on strictly positive values; the fourth one is unconstrained.
To do this, I would like to use R. I have tried to apply the nlminb function with lower=c(0.001, 0.001, 0.001, -Inf) as one of its optional arguments. The procedure does converge, but it turns out that the proposed solution does not satisfy the constraint!
I have an alternative solution that consists of using an exponential transformation. However, I would like to figure out why R returns a solution that does not meet my requirements.
Any comments would be appreciated,
Thanks,
Marco
It would be very difficult for me to provide that function here, because it depends on a number of pre-defined objects.
Anyway, I am not sure I understand why this occurs, but I have realized that my function sometimes returns NaN due to very, very large numbers. Actually, I have some doubts about convergence.
On the other hand, I have made some modifications, and the alternative solution seems to work well.
In conclusion, I think the problem came from my function, not from nlminb.
Best,
Marco
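For reference, a minimal sketch of a box-constrained nlminb call of the kind described above. The objective is illustrative, not the actual function from the question; the guard reflects the diagnosis above, since an objective that returns NaN or Inf can send the optimizer off the rails:

# Illustrative objective: first three variables strictly positive, fourth free.
f <- function(p) {
  val <- sum(log(p[1:3])^2) + (p[4] - 1)^2
  # Guard: never let nlminb see NaN/Inf; penalize bad regions instead.
  if (!is.finite(val)) return(.Machine$double.xmax)
  val
}

fit <- nlminb(start = c(1, 1, 1, 0), objective = f,
              lower = c(0.001, 0.001, 0.001, -Inf))
fit$par           # should respect the lower bounds
fit$convergence   # 0 indicates successful convergence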
