How to do an inverse log transformation in R?

I am familiar with regular log transformations:
DF1$RT <- log(DF1$RT)
How do I perform an inverse log transformation in R?

The term inverse can be used with two different meanings:
reciprocal. In this case the inverse of log(x) is 1/log(x)
inverse function. In this case it refers to solving the equation log(y) = x for y in which case the inverse transformation is exp(x) assuming the log is base e. (In general, the solution is b^x if the log is of base b. For example, if log10(y) = x then the inverse transformation is 10^x.)
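A quick check of the function-inverse sense, with made-up values:
x <- c(200, 450, 800)
logged <- log(x)     # natural-log transform, as in the question
exp(logged)          # inverse transform: recovers 200 450 800
10^(log10(x))        # same idea for base-10 logs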

Related

Random number simulation in R

I have been going through some random number simulation equations and found that R doesn't have a built-in function for the Pareto distribution.
rpareto is given as
rpareto <- function(n, a, l) {
  rp <- l * ((1 - runif(n))^(-1/a) - 1)
  rp
}
Can someone explain the intuitive meaning behind this?
It's a well known result that if X is a continuous random variable with CDF F(.), then Y = F(X) has a Uniform distribution on [0, 1].
This result can be used to draw random samples of any continuous random variable whose CDF is known: generate u, a Uniform(0, 1) random variable and then determine the value of x for which F(x) = u.
In specific cases, there may well be more efficient ways of sampling from F(.), but this will always work as a fallback.
It's likely (I haven't checked the accuracy of the code myself, but it looks about right) that the body of your function solves F(x) = u for known u in order to generate a random variable with a Pareto distribution. You can check it with a little algebra after getting the CDF from this Wikipedia page.
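To spell out the algebra (a sketch, assuming the Pareto Type II / Lomax CDF F(x) = 1 - (1 + x/l)^(-a), which is the form the code above inverts): setting F(x) = u gives (1 + x/l)^(-a) = 1 - u, hence x = l*((1-u)^(-1/a) - 1). A numerical spot check:
rpareto <- function(n, a, l) l * ((1 - runif(n))^(-1/a) - 1)
a <- 2; l <- 3
samp <- rpareto(1e5, a, l)
x <- c(1, 5, 10)
# empirical CDF of the samples vs. the assumed theoretical CDF
cbind(empirical = ecdf(samp)(x), theoretical = 1 - (1 + x/l)^(-a))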

how do you generate samples from the logistic CDF using the inverse-CDF method

My question is how to generate a sample in R from a logistic CDF with the inverse CDF method. The logistic density is p(θ) = exp(θ)/(1 + exp(θ))^2
Here is the algorithm for that method:
1: for t = 1 to T do
2: sample q(t) ∼ Unif(0, 1)
3: θ(t) ← F^−1(q(t))
4: end for
Here is my code, but it just generates a vector of the same number repeated. The result should be log-concave, but it obviously would not be if I put it in a histogram. What is the problem?
First, define T as the number of draws you're taking from the uniform distribution:
T <- 100000
sample_q <- runif(T, 0, 1)
It seems like plogis will give you the cumulative distribution function, so I suppose I can just take its inverse:
generate_samples_from_logistic_CDF <- function(p) {
  for (t in length(T))
    cdf <- plogis((1 + exp(p) / (exp(p))))
  inverse_cdf <- (1 / cdf)
  return(inverse_cdf)
}
Then generate_samples_from_logistic_CDF(sample_q) should do it, but instead it only gives me the same value for everything.
Since the inverse CDF is already coded in R as qlogis(), this should work:
qlogis(runif(100000))
or if you want to do it "by hand" rather than using the built-in qlogis(), you can use R <- runif(100000); log(R/(1-R))
Note that rlogis(100000) should be more efficient.
One of your confusions is that "inverse" in the algorithm description above doesn't mean the multiplicative inverse or reciprocal (i.e. 1/x), but rather the function inverse (which in this case is log(q/(1-q))).
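A quick sketch comparing all three approaches (they should produce essentially the same histogram):
set.seed(1)
n <- 100000
u <- runif(n)
s1 <- qlogis(u)          # built-in quantile function (inverse CDF)
s2 <- log(u / (1 - u))   # inverse CDF by hand
s3 <- rlogis(n)          # direct sampler
hist(s1, breaks = 100, freq = FALSE)
curve(dlogis(x), add = TRUE, col = "red")  # logistic density overlay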

Looking for a way to find the differential equation of an orthogonal regression

I used orthogonal regression in R to find a relation between two variables. I want to find the tangent at several points on the best-fit line.
D(expression(model), "x")
gives me a very unexpected result. I suspect it is because the poly function uses orthogonal polynomials. From the regression above I get
D(expression(44+ 67*x -5.5*x^2), "x")
which returns me
67 - 5.5 * (2 * x)
It looks obviously wrong, just like the coefficients do (I know they are not actually wrong).
x <- c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
y <- c(10, 15, 23, 33, 46, 50, 57, 63, 68, 75)
model <- lm(y ~ poly(x, 2))
Now I want to find the tangent at x = 2 and x = 7.
If I just look at the numbers, I suspect that at x = 2 the tangent slope would be something like 6.5? (23-10)/(3-1)
Because it is a second-order polynomial regression, it makes no sense to plug the coefficients I get from summary() into the equation directly, as that gives a meaningless result.
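A minimal sketch of one way around this: refit with poly(x, 2, raw = TRUE) so the coefficients belong to the ordinary 1, x, x^2 basis, then differentiate that polynomial by hand.
x <- c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
y <- c(10, 15, 23, 33, 46, 50, 57, 63, 68, 75)
model_raw <- lm(y ~ poly(x, 2, raw = TRUE))
b <- coef(model_raw)                                # intercept, x coefficient, x^2 coefficient
tangent_slope <- function(x0) b[2] + 2 * b[3] * x0  # derivative of b1 + b2*x + b3*x^2
tangent_slope(c(2, 7))                              # slopes of the tangents at x = 2 and x = 7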

Is there a way to get the antilog in R?

I've been given a data set and have inputted the values into R. For the assignment question, I need to replicate the following equation: y = 0.08x^0.75.
In order to turn this into an equation that fits the form y = B0 + B1x, I took the log10 of both sides using the following code.
fit <- lm(log10(Predator_Biomass)~log10(Prey_Biomass))
summary(fit)
From this I was able to obtain: y = -1.1050 + 0.7450x
Now I've been instructed that I need to take the antilog of both sides so that the Bo value will match 0.08 or be somewhat similar. Is there an antilog function in R that could be helpful to this? Any information would be helpful.
EDIT: Apparently everything that was offered as an answer only took an antilog of the coefficients and not the entire equation. Is there a way to take the antilog of an equation in R?
This is really a math problem more than a computational problem. If you fit a log-log regression as follows:
fit <- lm(log10(Predator_Biomass)~log10(Prey_Biomass))
The underlying equation is
log10(y) = a+b*log10(x)
Raising 10 to both sides gives:
y = 10^(a+b*log10(x)) = 10^a * 10^(b*log10(x)) = 10^a * (10^log10(x))^b
= 10^a * x^b
The parameters a and b are the first and second coefficients of the linear model. If you want to recover the parameters of y = c*x^b you need to antilog the intercept (10^(coef(fit)[1])), but the exponent b should be fine without transformation (coef(fit)[2]).
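In code (using the fit above; here 10^(-1.1050) is roughly 0.0785, close to the expected 0.08):
cc <- coef(fit)
c_hat <- 10^cc[1]   # antilog of the intercept: the multiplicative constant c
b_hat <- cc[2]      # the exponent b, used as-is
c_hat; b_hat        # fitted power law: y = c_hat * x^b_hat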

Quadrature to approximate a transformed beta distribution in R

I am using R to run a simulation in which I use a likelihood ratio test to compare two nested item response models. One version of the LRT uses the joint likelihood function L(θ,ρ) and the other uses the marginal likelihood function L(ρ). I want to integrate L(θ,ρ) over f(θ) to obtain the marginal likelihood L(ρ). I have two conditions: in one, f(θ) is standard normal (μ=0,σ=1), and my understanding is that I can just pick a number of abscissa points, say 20 or 30, and use Gauss-Hermite quadrature to approximate this density. But in the other condition, f(θ) is a linearly transformed beta distribution (a=1.25,b=10), where the linear transformation B'=11.14*(B-0.11) is such that B' also has (approximately) μ=0,σ=1.
I am confused enough about how to implement quadrature for a beta distribution but then the linear transformation confuses me even more. My question is threefold: (1) can I use some variation of quadrature to approximate f(θ) when θ is distributed as this linearly transformed beta distribution, (2) how would I implement this in R, and (3) is this a ridiculous waste of time such that there is an obviously much faster and better method to accomplish this task? (I tried writing my own numerical approximation function but found that my implementation of it, being limited to the R language, was just too slow to suffice.)
Thanks!
First, I assume you can express your L(θ,ρ) and f(θ) in terms of actual code; otherwise you're kinda screwed. Given that assumption, you can use integrate to perform the necessary computations. Something like this should get you started; just plug in your expressions for L and f.
marglik <- function(rho) {
  integrand <- function(theta, rho) L(theta, rho) * f(theta)
  # set your lower/upper integration limits as appropriate
  integrate(integrand, lower = -5, upper = 5, rho = rho)
}
For this to work, your integrand has to be vectorized; i.e., given a vector input for theta, it must return a vector of outputs. If your code doesn't fit the bill, you can use Vectorize on the integrand function before passing it to integrate:
integrand <- Vectorize(integrand, "theta")
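A toy end-to-end run, with made-up stand-ins for L and f just to show the mechanics (note that integrate() returns a list, so use $value to extract the number):
L <- function(theta, rho) dnorm(theta, mean = rho)  # hypothetical likelihood, for illustration only
f <- function(theta) dnorm(theta)                   # hypothetical density, for illustration only
marglik <- function(rho) {
  integrand <- function(theta, rho) L(theta, rho) * f(theta)
  integrate(integrand, lower = -5, upper = 5, rho = rho)$value
}
marglik(0.5)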
Edit: not sure if you're also asking how to define f(θ) for the transformed beta distribution; that seems rather elementary for someone working with joint and marginal likelihoods. But if you are, then the density of B' = a*B + b, given f(B), is
f'(B') = f(B)/a, where B = (B' - b)/a; that is, f'(B') = f((B' - b)/a) / a
So in your case, f(theta) is dbeta(theta/11.14 + 0.11, 1.25, 10) / 11.14
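As a sanity check (a sketch; the support bounds come from the transformation B' = 11.14*(B - 0.11) applied to B in [0, 1]):
f_theta <- function(theta) dbeta(theta / 11.14 + 0.11, 1.25, 10) / 11.14
lo <- 11.14 * (0 - 0.11); hi <- 11.14 * (1 - 0.11)  # support of the transformed variable
integrate(f_theta, lo, hi)$value                    # ~1: a valid density
m1 <- integrate(function(t) t * f_theta(t), lo, hi)$value    # mean, ~0
m2 <- integrate(function(t) t^2 * f_theta(t), lo, hi)$value
sqrt(m2 - m1^2)                                     # sd, ~1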
