Determining slopes with R code [closed] - r

I have a number of melting curves for which I want to determine the slope of the steepest part between the minimum (valley) and maximum (peak) using R (the slope at the inflection point corresponds to the melting point). The solutions I can imagine are either to determine the slope at every point and then find the maximum positive value, or to fit a four-parameter Weibull-type curve with the drc package to determine the inflection point (essentially the 50% response point between minimum and maximum). In the latter case the tricky part is that the fit has to be restricted, for each curve, to the temperature range between the minimum (valley) and maximum (peak) fluorescence response, and these temperature ranges are different for each curve.
Grateful for any feedback!
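For reference, the drc route might look roughly like the following. This is only a sketch, assuming a single curve stored in a hypothetical data frame d with columns temp and fluo, and using W1.4() (drc's four-parameter Weibull) with the fit restricted to the rows between the valley and the peak:
library(drc)

i_min <- which.min(d$fluo)                              # valley
i_max <- which.max(d$fluo)                              # peak
d_sub <- d[seq(min(i_min, i_max), max(i_min, i_max)), ] # restrict to valley..peak

fit <- drm(fluo ~ temp, data = d_sub, fct = W1.4())
ED(fit, 50)   # temperature at the 50% response level, roughly the melting point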

The diff function accomplishes the equivalent of numerical differentiation on equally spaced values (up to a constant factor), so finding the maximum (or minimum) of its output identifies the location of steepest ascent (or descent):
z <- exp(-seq(0,3, by=0.1)^2 )
plot(z)
plot(diff(z))
z[ which(abs(diff(z))==max(abs(diff(z))) )]
# [1] 0.6126264
# could have also tested for min() instead of max(abs())
plot(z)
abline( v = which(abs(diff(z))==max(abs(diff(z))) ) )
abline( h = z[which(abs(diff(z))==max(abs(diff(z))) ) ] )
Treating the index spacing as the x-step, the slope is simply the difference at that point (divide by the actual x-spacing, 0.1 in this example, to express the slope in the original units):
diff(z)[ which(abs(diff(z))==max(abs(diff(z))) ) ]
# [1] -0.08533397
... but I question whether that is really of much interest. I would have thought that getting the index (which would be the melting point subject to an offset) would be the value of interest.
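For a real melting curve, the same idea gives a slope in the original units if you divide by the temperature step and restrict the search to the window between the valley and the peak. A minimal sketch, assuming hypothetical vectors temp and fluo for one curve:
steepest_slope <- function(temp, fluo) {
  i_min <- which.min(fluo)                            # valley
  i_max <- which.max(fluo)                            # peak
  idx   <- seq(min(i_min, i_max), max(i_min, i_max))  # window between them
  slope <- diff(fluo[idx]) / diff(temp[idx])          # numerical derivative
  k     <- which.max(slope)                           # steepest ascent
  c(slope = slope[k],
    melting_temp = mean(temp[idx][k + 0:1]))          # midpoint of that step
}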

Related

How to calculate the distance between the Best Fit Curve and the data points? [closed]

Hello Everyone!
I am fairly new to R programming, so I have a small question regarding the distance (or offset) of data-set points from their best-fit curve.
The given figure shows some points and a Best-fit Curve for those points.
As we can see, some points are very far away from the best-fit curve, and I want to write code that tells me the distance (or offset) of every point from the curve. Then I want to display all the points that are far away from the curve.
I have the equation of the curve and all the data points. The curve has an exponential equation.
The uploaded image is just an approximation of the real figure; I drew it only as an example.
If someone can tell me what method or functions should be used here, it would be a big help.
Thank You.
In many R workflows you will actually fit the data with a function such as lm, loess, or glm, and the fitted model stores the residuals with the result (accessible via residuals()).
If you indeed have your own equation, then you simply take the x-values of the data points, calculate the corresponding y-values from the equation, and subtract them from the corresponding data y-values.
A toy example:
# decay function
x <- 1:50
start <- 80
decay <- 0.95
equation_y <- start * decay^x
plot(x, equation_y, type = "l")
# simulated data points
data_y <- equation_y + rnorm(50, sd = 3)
points(x, data_y, col = "red")
# the differences between curve and data
equation_y - data_y
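To then pick out the points that are "far away", one rough convention is to flag residuals larger than a couple of standard deviations. A sketch continuing the toy example above (the factor of 2 is an arbitrary cutoff, and abs() makes the sign of the difference irrelevant):
resid   <- data_y - equation_y                    # observed minus curve
far_off <- abs(resid) > 2 * sd(resid)             # arbitrary 2-sd threshold
which(far_off)                                    # indices of the flagged points
points(x[far_off], data_y[far_off], col = "blue", pch = 19)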

Find the standard deviation with normal probabilities using R [closed]

Let's say that there is a variable A that has a Normal distribution N(μ,σ).
I have two probabilities, P(A>a) and P(A<b), where a<b, and each probability is given as a percentage (example below).
With this information, can R find the standard deviation? I don't know which functions to use (qnorm, dnorm, ...) to get the standard deviation.
What I tried to do, knowing that a = 100, b = 200, P(A>a) = 5% and P(A<b) = 15%:
Use the standardized Normal distribution with μ = 0, σ = 1 (but I don't know how to express this in R to get what I want).
Look up the probability in the normal distribution table and calculate Z..., but it didn't work.
Is there a way R can find the standard deviation from just this information?
Your problem as stated is impossible; check that your inequalities and values are correct.
You give the example that p(A > 100) = 5% which means that the p( A < 100 ) = 95% which means that p( A < 200 ) must be greater than 95% (all the probability between 100 and 200 adds to the 95%), but you also say that p( A < 200 ) = 15%. There is no set of numbers that can give you a probability that is both greater than 95% and equal to 15%.
Once you fix the problem definition to something that works, there are a couple of options. Using Ryacas you may be able to solve directly (2 equations and 2 unknowns), but since this is based on the integral of the normal I don't know whether it would work or not.
Another option would be to use optim or similar functions to find (approximately) a solution. Create an objective function that takes 2 parameters, the mean and sd of the normal, and computes the sum of squared differences between the stated percentages and those computed from the current guesses. The objective function will be 0 at the "correct" mean and standard deviation and positive everywhere else. Then pass this function to optim to find the minimum.
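A minimal sketch of that optim() idea, with the example numbers changed to something solvable (say P(A < 100) = 5% and P(A < 200) = 95%, which are made-up values):
obj <- function(par) {
  mu    <- par[1]
  sigma <- par[2]
  (pnorm(100, mu, sigma) - 0.05)^2 + (pnorm(200, mu, sigma) - 0.95)^2
}
fit <- optim(c(150, 50), obj)   # starting guesses for the mean and sd
fit$par                         # roughly mu = 150, sigma = 30.4
For this particular two-probability case a closed form also exists: sigma = (200 - 100) / (qnorm(0.95) - qnorm(0.05)) and mu = 100 - sigma * qnorm(0.05).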

How do I calculate amplitude and phase angle of fft() output from real-valued input? [closed]

I have 24 samples from a real-valued signal. I run fft() on the samples and get the complex output. I want to obtain the amplitude and phase angle of each of the non-redundant harmonics. I know my calculation must account for the redundancy (conjugate symmetry) that comes with real-valued input. How do I:
(1) convert from the two-sided to a one-sided Fourier transform,
I've heard several things here. For example, do I multiply the first 12 harmonics (i.e., 2nd through 13th elements of fft() output) by two and drop the rest of the harmonics (i.e., keep 1st through 13th elements of fft() output)?
(2) calculate the amplitude of the one-sided Fourier transform,
I know I can use the Mod() function, but when do I do this? Before or after I convert from two- to one-sided?
(3) calculate the phase angle of the one-sided Fourier transform.
I know I can use the atan() function on the ratio of imaginary to real parts of the fft() output, but again, when do I do this? Before or after two- to one-sided conversion? Also, what if atan is undefined?
Thanks.
Since your input is real, the output of the FFT is conjugate-symmetric about N/2, so you only need the first N/2 + 1 bins (DC through Nyquist); scale the magnitude of every bin except DC and Nyquist by a factor of 2. For the phase you ideally need an atan2-style function that takes the real and imaginary components as separate arguments and returns a four-quadrant result; in R, Arg() on the complex values does exactly that (equivalently atan2(Im(z), Re(z))).
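A minimal sketch for N = 24 points (x is just a placeholder signal here). The order of operations barely matters: Mod() and Arg() can be taken before or after dropping the redundant bins, since dropping them only discards elements.
set.seed(1)
x <- rnorm(24)                      # placeholder real-valued signal
N <- length(x)
X <- fft(x)

keep  <- 1:(N/2 + 1)                # DC through the Nyquist bin
amp   <- Mod(X[keep]) / N           # amplitudes, normalised by N
amp[2:(N/2)] <- 2 * amp[2:(N/2)]    # double everything except DC and Nyquist
phase <- Arg(X[keep])               # four-quadrant phase, same as atan2(Im, Re)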

What does correlation coefficient actually represent [closed]

What does the correlation coefficient intuitively mean? If I have a series of X and a series of Y, and I feed these two into a Weka multilayer perceptron treating Y as the output and X as the input, I get a correlation coefficient of 0.76. What does this intuitively represent, and how do I explain it to a business person or a non-techie?
There are several correlation coefficients. The most commonly used, and the one usually meant when people say "the" correlation coefficient, is Pearson's product-moment correlation.
A correlation coefficient shows the degree of linear dependence of x and y. In other words, the coefficient shows how close two variables lie along a line.
If the coefficient is equal to 1 or -1, all the points lie along a line. If the correlation coefficient is equal to zero, there is no linear relation between x and y. However, this does not necessarily mean that there is no relation at all between the two variables; there could, for example, be a non-linear relation.
A positive relationship means that the two variables move into the same direction. A higher value of x corresponds to higher values of y, and vice versa.
A negative relationship means that the two variables move into the opposite directions. A lower value of x corresponds to higher values of y, and vice versa.
Here you have a handful of examples:
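As a stand-in for those examples, a small simulated illustration in R (the numbers are made up; as far as I know, Weka's reported correlation for a regression model is this same Pearson statistic computed between predicted and actual values):
set.seed(42)
x        <- rnorm(100)
y_strong <- 2 * x + rnorm(100, sd = 0.5)   # points close to a line
y_none   <- rnorm(100)                     # no linear relation
cor(x, y_strong)   # close to 1
cor(x, y_none)     # close to 0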

Estimating the rate of convergence, finite difference scheme [closed]

I'm trying to estimate the rate of convergence of a sequence.
Background:
u^{n+1} = G u^n, where G is an iteration matrix (coming from the heat equation).
Fixing dx = 0.1, and setting dt = dx*dx/2.0 to satisfy the stability constraint,
I then do a number of iterations up to time T = 0.1 and calculate the error (the analytical solution is known) using the max-norm.
This gives me a sequence of global errors, which from the theory should be of the form O(dt) + O(dx^2).
Now, I want to confirm that we have O(dt).
How should I do this?
Relaunch the same code with dt/2 and witness the error being halved.
I think Alexandre C.'s suggestion might need a little refinement (no pun intended) because the global error estimate depends on both Δt and Δx.
So if Δx were too coarse, refining Δt by halving might not produce the expected reduction of halving the error.
A better test might then be to reduce Δt by a factor of four and Δx by a factor of two simultaneously. The global error estimate then leads us to expect the error to drop by a factor of four.
Incidentally, it is common to plot the global error against the step size on a log-log graph to estimate the order of convergence.
With greater resources (of time and computer runs) independently varying the time and space discretizations would allow a two-parameter fit (of the same sort of log-log model).
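A small sketch of that log-log fit in R, using made-up error values so it runs standalone; the fitted slope is the estimated order:
dt  <- c(0.04, 0.02, 0.01, 0.005)
err <- c(0.081, 0.042, 0.021, 0.010)   # hypothetical global errors
fit <- lm(log(err) ~ log(dt))
coef(fit)[2]                           # slope near 1 suggests O(dt) convergence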
I suck at physics, but even I can do simple problems like this.
Well, what exactly are you having trouble with?
Calculating the rate of convergence:
If you have a sequence a_n, first find the value L it converges to, L = lim_{n -> ∞} a_n.
The rate of convergence is then μ = lim_{n -> ∞} (a_{n+1} - L) / (a_n - L).
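A tiny numeric check of that definition in R, using a geometric sequence with a known limit of 2 and a true rate of 0.5 (values chosen purely for illustration):
L_true <- 2
a      <- L_true + 0.5^(1:20)                 # a_n converging to 2
(a[-1] - L_true) / (a[-length(a)] - L_true)   # ratios are all about 0.5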
Finding the combined uncertainty against the analytical solution, using the equation:
Uc = sqrt( (∂a/∂t * Δt)^2 + (∂a/∂x * Δx)^2 )
