The I in PID (Proportional Integral Derivative) is the sum of the last few previous errors, weighted only by its gain.
Using error(-1) to mean the previous error, error(-2) the error before that, and so on, 'I' can be described as:
I = (error(-1) + error(-2) + error(-3) + error(-4) etc...) * I_gain
Why, when PID was designed, was 'I' not instead made to weight errors less and less the further back in the past they are, for example:
I = (error(-1) + (error(-2) * 0.9) + (error(-3) * 0.81) + (error(-4) * 0.729) + etc...) * I_gain
The integral term is the sum of ALL the past errors. You simply add the error to the "integrator" at each time step. If the integrator needs to be limited, clamp it to a minimum or maximum value when it goes out of range. Then copy this accumulated value to your output, add the proportional and derivative terms, and clamp the output again if necessary.
The Derivative term is the difference in the present and previous error (the rate of change in the error). P of course is just proportional to the error.
err = reference - new_measurement         // current error
I += kI * err                             // accumulate every past error (the integral term)
Derivative = err - old_err                // rate of change of the error
output = I + kD * Derivative + kP * err   // all gains assumed positive
old_err = err                             // remember the error for the next step
And there you have it. Limits omitted of course.
Once the controller reaches the reference value, the error will become zero and the integrator will stop changing. Noise will naturally make it bounce around a bit, but it will stay at the steady state value required to meet your objective, while the P and D terms do most of the work to reduce transients.
Notice that in a steady state condition, the I term is the ONLY thing providing any output. If the controller has reached the reference and holding it requires a non-zero output, that output is provided solely by the integrator, since the error is zero. If the I term used decaying weights on past errors, it would drift back toward zero and fail to sustain the output as needed.
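To make that last point concrete: the weighting proposed in the question is equivalent to the recursive update I = 0.9*I + kI*err (a "leaky" integrator). Here is a minimal sketch in R, where the decay factor 0.9 and the starting value 5.0 are just illustrative assumptions, showing what happens to the two versions once the error has settled to zero:
# Assume the loop has settled: the error is zero, but the plant still needs
# a non-zero output (e.g. a heater holding a temperature above ambient).
err <- 0
kI <- 1.0
I_true <- 5.0      # standard integrator at steady state (illustrative value)
I_decayed <- 5.0   # same starting point, but with the 0.9 weighting applied
for (step in 1:50) {
  I_true <- I_true + kI * err               # standard integrator: holds its value
  I_decayed <- 0.9 * I_decayed + kI * err   # decaying weights: leaks away
}
I_true      # still 5.0: the output is sustained
I_decayed   # about 0.026: the output has collapsed, so the error would reappear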
Since we know that 0.1 + 0.2 != 0.3 due to limited number representation, we need to instead check that abs(0.1+0.2 - 0.3) < ε. The question is, what ε value should we generally choose for different types? Is it possible to estimate it depending on the number of bits and on the number and types of operations that are likely to be performed?
A baseline value for epsilon is the difference between 1.0 and the next highest representable value. In C++, this value is available as std::numeric_limits<T>::epsilon().
Note that, at a minimum, you need to scale this value in proportion to the actual number you're testing. Also, since the precision scales only roughly with the numeric value, you may want to increase your margin by a small factor to prevent spurious errors:
#include <algorithm>  // std::max (used further below)
#include <cmath>      // std::fabs
#include <limits>     // std::numeric_limits
double epsilon = std::numeric_limits<double>::epsilon();
// C++ floating-point literals and math functions are double by default
bool is_near = std::fabs(0.1 + 0.2 - 0.3) <= 0.3 * (2 * epsilon);
As a more complete example, a function for comparing doubles:
bool is_approximately_equal(double a, double b) {
    double epsilon = std::numeric_limits<double>::epsilon();
    double scale = std::max(std::fabs(a), std::fabs(b));
    return std::fabs(a - b) <= scale * (2 * epsilon);
}
In practice, the actual epsilon value you should use depends on what you're doing, and what kind of tolerance you actually need. Numeric algorithms will typically have precision tolerances (average and maximum) as well as time and space estimates. But the precision estimate typically starts with something like characteristic_value * epsilon.
You can estimate the machine epsilon using the algorithm below. You then need to multiply this epsilon by the integer part of 1 + (log(number)/log(2)). After you have determined this value for all numbers in your equation, you can use error analysis to estimate the epsilon value for a specific calculation.
var epsilon = 1.0;
// Halve epsilon until adding it to 1.0 no longer changes the result
while (1.0 + (epsilon / 2.0) > 1.0) {
    epsilon = epsilon / 2.0;
}
// Calculate the error for a + b using error analysis
var epsilon_equation = Math.sqrt(2 * epsilon * epsilon);
document.write('Epsilon: ' + epsilon_equation + '<br>');
document.write('Floating point error: ' + Math.abs(0.2 + 0.4 - 0.6) + '<br>');
document.write('Comparison using epsilon: ');
document.write(Math.abs(0.2 + 0.4 - 0.6) < epsilon_equation);
Following your comment, I have tried the same approach in C# and it seems to work:
using System;

namespace ConsoleApplication
{
    public class Program
    {
        public static void Main(string[] args)
        {
            double epsilon = 1.0;
            while (1.0 + (epsilon / 2.0) > 1.0)
            {
                epsilon = epsilon / 2.0;
            }
            double epsilon_equation = Math.Sqrt(2 * epsilon * epsilon);
            Console.WriteLine(Math.Abs(1.0 + 2.0 - 3.0) < Math.Sqrt(3.0 * epsilon_equation * epsilon_equation));
        }
    }
}
I am aware of the following approach to exact floating-point predicates computation: calculate the value, using standard floating point types, and calculate the error. Usually, the predicate can be stated as p(x) == 0 or p(x) < 0, etc. If the absolute value of p(x) is greater than the error, the computations are considered exact. Otherwise, interval-based or exact rational arithmetic is used.
It is possible to estimate the error from the expression used. I've heard of automatic generators of this, but failed to find any reference.
As far as I know, exact computations are mainly used for geometry, and googling for "exact geometric computations" gives a lot on the topic.
Here is an article that explains error estimation to some extent.
Let's say that there is a variable A that has a Normal distribution N(μ,σ).
I know two probabilities, P(A>a) and P(A<b), where a<b, and each probability is given as a percentage (see the example below).
With this information, can R find the standard deviation? I don't know which commands to use (qnorm, dnorm, ...) to get the standard deviation.
What I tried to do, knowing that a = 100, b = 200, P(A>a) = 5% and P(A<b) = 15%:
Use the standardized Normal distribution with μ = 0, σ = 1 (but I don't know how to set this up in R to get what I want).
Look up the probability in the normal distribution table and calculate Z..., but it didn't work.
Is there a way R can find the standard deviation with just this information?
Your problem as stated is impossible; check that your inequalities and values are correct.
You give the example that p(A > 100) = 5%, which means that p(A < 100) = 95%, which in turn means that p(A < 200) must be greater than 95% (all the probability between 100 and 200 adds to that 95%). But you also say that p(A < 200) = 15%. There is no pair of parameters that can give a probability that is both greater than 95% and equal to 15%.
Once you fix the problem definition to something that works, there are a couple of options. Using Ryacas you may be able to solve directly (2 equations and 2 unknowns), but since this is based on the integral of the normal I don't know whether it would work or not.
Another option is to use optim or a similar routine to find an approximate solution. Create an objective function that takes 2 parameters, the mean and sd of the normal, and computes the sum of the squared differences between the stated percentages and those computed from the current guesses. The objective function will be 0 at the "correct" mean and standard deviation and positive everywhere else. Then pass this function to optim to find the minimum.
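For example, a minimal sketch of that optim approach; the target probabilities below are made-up, mutually consistent values (since the pair in the question is impossible), and the starting guesses are arbitrary:
objective <- function(par) {
  mu <- par[1]
  sd <- exp(par[2])                          # optimise log(sd) so sd stays positive
  p1 <- 1 - pnorm(100, mean = mu, sd = sd)   # P(A > 100), target 0.05
  p2 <- pnorm(200, mean = mu, sd = sd)       # P(A < 200), target 0.999
  (p1 - 0.05)^2 + (p2 - 0.999)^2             # 0 only at the correct mu and sd
}
fit <- optim(c(100, log(10)), objective)     # arbitrary starting values for mu and log(sd)
c(mean = fit$par[1], sd = exp(fit$par[2]))
With exactly two conditions you can also solve directly, since a = mu + sd*qnorm(1 - p1) and b = mu + sd*qnorm(p2) are two linear equations in mu and sd; the optim version is mainly useful when you have more conditions than parameters.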
I have a number of melting curves, for which I want to determine the slope of the steepest part between the minimum (valley) and maximum (peak) using R code (the slope at the inflection point corresponds to the melting point). The solutions I can imagine are either to determine the slope at every point and then find the maximum positive value, or to fit a 4-parameter Weibull-type curve using the drc package to determine the inflection point (basically corresponding to the 50% response point between minimum and maximum). In the latter case the tricky part is that this fitting has to be restricted, for each curve, to the temperature range between the minimum (valley) and maximum (peak) fluorescence response. These temperature ranges are different for each curve.
Grateful for any feedback!
The diff function accomplishes the equivalent of numerical differentiation on equally spaced values (up to a constant factor), so the maximum (or minimum) of its result can be used to identify the location of steepest ascent (or descent):
z <- exp(-seq(0,3, by=0.1)^2 )
plot(z)
plot(diff(z))
z[ which(abs(diff(z))==max(abs(diff(z))) )]
# [1] 0.6126264
# could have also tested for min() instead of max(abs())
plot(z)
abline( v = which(abs(diff(z))==max(abs(diff(z))) ) )
abline( h = z[which(abs(diff(z))==max(abs(diff(z))) ) ] )
With an x-difference of 1, the slope is just the difference at that point:
diff(z) [ which(abs(diff(z))==max(abs(diff(z))) ) ]
[1] -0.08533397
... but I question whether that is really of much interest. I would have thought that getting the index (which would be the melting point subject to an offset) would be the value of interest.
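To handle the part about restricting each curve to its own valley-to-peak temperature range, something along these lines might work. This is only a sketch, and it assumes each curve is a data frame with hypothetical columns temp (equally spaced) and fluor:
steepest_point <- function(curve) {
  lo <- which.min(curve$fluor)            # index of the valley
  hi <- which.max(curve$fluor)            # index of the peak
  rng <- seq(min(lo, hi), max(lo, hi))    # restrict to the valley-to-peak window
  d <- diff(curve$fluor[rng])             # successive differences within the window
  i <- which.max(abs(d))                  # steepest step
  dx <- diff(curve$temp[rng])[i]          # temperature spacing at that step
  c(temp = curve$temp[rng][i],            # approximate melting temperature
    slope = d[i] / dx)                    # slope in fluorescence units per degree
}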
Let's say I want to predict a dependent variable D, where:
D<-rnorm(100)
I cannot observe D, but I know the values of three predictor variables:
I1<-D+rnorm(100,0,10)
I2<-D+rnorm(100,0,30)
I3<-D+rnorm(100,0,50)
I want to predict D by using the following regression equation:
I1 * w1 + I2 * w2 + I3 * w3 = ~D
However, I do not know the correct values of the weights (w); I would like to fine-tune them by repeatedly updating my estimate:
in the first step I use equal weights:
w1= .33, w2=.33, w3=.33
and I estimate D using these weights:
EST = I1 * .33 + I2 * .33 + I3 * .33
I receive feedback, which is a difference score between D and my estimate (diff=D-EST)
I use this feedback to modify my original weights and fine-tune them to eventually minimize the difference between D and EST.
My question is:
Is the difference score sufficient for being able to fine-tune the weights?
What are some ways of manually fine-tuning the weights? (e.g. can I look at the correlation between diff and I1, I2, I3 and use that as a weight?)
The following command,
coefficients(lm(D ~ I1 + I2 + I3))
will give you the ideal weights to minimize diff.
Your defined diff will not tell you enough to manually manipulate the weights correctly as there is no way to isolate the error component of each I.
The correlation between D and the I's is not sufficient either, as it only tells you the strength of each predictor, not its weight. If your I's are truly independent (that is, their noise components are independent of each other and of D, which is a strong assumption, but true when each uses its own rnorm draw), you could try adjusting one weight at a time and watching how diff responds, but fitting a linear regression model is the simplest way to do it.
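For completeness, a small sketch that reproduces the question's setup and reads the weights off lm. The seed is arbitrary, and of course in the real problem D would not be observable, so this only shows what the ideal weights look like for simulated data:
set.seed(1)                       # arbitrary seed, for a reproducible illustration
D  <- rnorm(100)
I1 <- D + rnorm(100, 0, 10)
I2 <- D + rnorm(100, 0, 30)
I3 <- D + rnorm(100, 0, 50)
fit <- lm(D ~ I1 + I2 + I3)       # least-squares weights (plus an intercept)
coefficients(fit)
EST <- fitted(fit)                # estimate of D built from those weights
diff_score <- D - EST             # the question's feedback score
mean(diff_score^2)                # no other linear weighting can make this smaller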
I'm trying to estimate the rate of convergence of a sequence.
Background:
u^(n+1) = G u^n, where G is an iteration matrix (coming from the heat equation).
Fixing dx = 0.1 and setting dt = dx*dx/2.0 to satisfy the stability constraint,
I then do a number of iterations up to time T = 0.1 and calculate the error (the analytical solution is known) using the max-norm.
This gives me a sequence of global errors, which from the theory should be of the form O(dt) + O(dx^2).
Now, I want to confirm that we have O(dt).
How should I do this?
Relaunch the same code with dt/2 and witness the error being halved.
I think Alexandre C.'s suggestion might need a little refinement (no pun intended) because the global error estimate depends on both Δt and Δx.
So if Δx were too coarse, refining Δt by halving might not produce the expected reduction of halving the error.
A better test might then be to simultaneously reduce Δt by a factor of four and Δx by a factor of two. The global error estimate then leads us to expect the error to drop by a factor of four.
Incidentally, it is common to plot the global error against the step size on a log-log graph and estimate the order of convergence from the slope.
With greater resources (of time and computer runs) independently varying the time and space discretizations would allow a two-parameter fit (of the same sort of log-log model).
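As a sketch of that log-log estimate in R, assuming you have already run the code at a few refinements (refining Δx alongside Δt as described above); the error values below are placeholders for the measured max-norm errors at T = 0.1:
dt  <- 0.005 / 2^(0:3)                      # hypothetical sequence of time steps
err <- c(4.1e-3, 2.0e-3, 1.0e-3, 5.1e-4)    # placeholder global errors for those runs
# Pairwise observed order: p = log(e_k / e_(k+1)) / log(dt_k / dt_(k+1))
log(err[-length(err)] / err[-1]) / log(dt[-length(dt)] / dt[-1])
# Or take the slope of the log-log fit; a value near 1 confirms O(dt)
coef(lm(log(err) ~ log(dt)))[2]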
I suck at physics, but even I can do simple problems like this.
Well, what exactly are you having trouble with?
Calculating the rate of convergence:
If you have a sequence a[n] (for a series Sum[a[n], {n, 1, Infinity}] this would be the sequence of partial sums), first find the value it converges to: L = Limit[a[n], n -> Infinity].
Now you can find the rate of convergence: μ = Limit[(a[n + 1] - L)/(a[n] - L), n -> Infinity].
Finding the combined uncertainty against the analytical solution, using the equation:
Uc = Sqrt[(D[a, t] Δt)^2 + (D[a, x] Δx)^2]