I want to generate events whose inter-arrival delays have a mean of "t" seconds. Starting at time 0, how do I generate the times at which the events occur? Basically, I want to generate a sequence of times t1, t2, t3, ... at which the event occurs. How do I write such a function?
Thank you.
You don't say what language - but take a look at Generate (Poisson?) random variable in real-time
The easiest solution is to compute the time of the next event from the mean inter-arrival delay "L". This is based on the cumulative distribution function for the exponential distribution, F(x) = 1 - e**(-lambda * x), where lambda = 1/L (L being the mean inter-arrival time) and x is the amount of time. This can be solved for x and fed with a uniform random number: x = -ln(1 - U) / lambda, where U is a random value in 0..1.
From the link above:
#include <math.h>
#include <stdlib.h>

/* Returns an exponentially distributed inter-arrival time for the given rate (1/mean). */
float nextTime(float rateParameter) {
    /* rand() / (RAND_MAX + 1.0f) is uniform in [0, 1), so the argument of logf() is never zero */
    return -logf(1.0f - (float) rand() / ((float) RAND_MAX + 1.0f)) / rateParameter;
}
This link provides a lot of information on how to do it, plus examples: How to Generate Random Timings for a Poisson Process
Note that there are other probability distribution functions that can be used for event generation (uniform, triangle, etc.). Many of these can be generated either with code from Boost or with the GNU Scientific Library (GSL).
So to compute times of events:
next_event = time() + nextTime(D);
following_event = next_event + nextTime(D);
If events have a duration, the duration can be drawn from another, independent Poisson distribution, some other random distribution, a fixed interval, etc. However, you will need to check that the interval to the next event is not shorter than the duration of the event you are simulating:
deltaT = nextTime(MEAN_EVT);
dur = nextTime(MEAN_DUR);
if (deltaT <= dur) {
    // either fix duration or get another event....
}
Python's standard library contains random.expovariate, which makes this very easy. For example, to create 10 samples:
import random
[random.expovariate(0.2) for i in range(10)]
Typically, this will be converted to integer:
import random
[int(random.expovariate(0.2)) for i in range(10)]
Thanks to this link.
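If the cumulative event times t1, t2, t3, ... are wanted rather than the individual gaps, the sampled delays just need to be summed as they are drawn. A minimal sketch, assuming the same rate of 0.2 events per unit time (i.e. a mean inter-arrival delay of 5):
import random
from itertools import accumulate

rate = 0.2                                            # events per unit time; mean gap = 1/rate = 5
gaps = [random.expovariate(rate) for _ in range(10)]  # inter-arrival delays
event_times = list(accumulate(gaps))                  # t1, t2, t3, ... measured from time 0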
Related
In my dataset, I have ants that switch between one state (in this case a resting state) and all other states over a period of time. I am attempting to fit an exponential distribution to the number of times an ant spends in a resting state for some duration of time (for instance, the ant may rest for 5 seconds 10 times, or it could rest for 6 seconds 5 times, etc.). While subjectively this distribution of durations seems to be exponential, I can't fit a single parameter exponential distribution (where the one parameter is rate) to the data. Is this possible to do with my dataset, or do I need to use a two parameter exponential distribution?
I am attempting to fit the data to the following equation (where lambda is rate):
lambda * exp(-lambda * x).
This, however, doesn't seem to be mathematically possible to fit to either the counts of my data or the probability density of my data. In R I attempt to fit the data with the following code:
fit = nls(newdata$x.counts ~ (b*exp(b*newdata$x.mids)), start =
list(x.counts = 1, x.mids = 1, b = 1))
When I do this, though, I get the following message:
Error in parse(text= x, keep.source = FALSE):
<text>:2:0: unexpected end of input
1: ~
^
I believe I am getting this because it's mathematically impossible to fit this particular equation to my data. Am I correct in this, or is there a way to transform the data or alter the equation so I can make it fit? I can also make it fit with the equation lambda * exp(mu * x), where mu is another free parameter, but my goal is to make this equation as simple as possible, so I would prefer to use the one-parameter version.
Here is the data, as I can't seem to find a way to attach it as a csv:
https://docs.google.com/spreadsheets/d/1euqdgHfHoDmQKXHrtOLcn5x5o81zY1sr9Kq6NCbisYE/edit?usp=sharing
First, you have a typo in your formula: you forgot the - sign in
(b*exp(b*newdata$x.mids))
But this is not what is throwing the error. The start parameter should be a list that initializes only the model parameter (here b), not x.counts or x.mids.
So the correct version would be:
fit = nls(newdata$x.counts ~ b*exp(-b*newdata$x.mids), start = list(b = 1))
My code for getting a proper plot in R does not seem to work (I am new to R and I am having difficulties with coding).
Basically, using the concept of temporal discounting in the form of the beta-delta model, we are supposed to calculate the subjective value of $10 at every delay from 0 to 365.
The context of the homework is that we have to account for the important exception that if a reward is IMMEDIATE, there’s no discount, but if it occurs at any delay there’s both an exponential discount and a delay penalty.
I created a variable called BetaDeltaValuesOf10, which is 366 elements long and represents the subjective value for $10 at each delay from 0 to 365.
The code needs to have the following properties using for-loops and an if-else statement:
1) IF the delay is 0, the subjective value is the objective magnitude (and should be saved to the appropriate element of BetaDeltaValuesOf10).
2) OTHERWISE, calculate the subjective value at the exponentially discounted rate, assuming 𝛿 = .98 and apply a delay penalty of .8, then save it to the appropriate element of BetaDeltaValuesOf10.
The standard code given to us to help us in creating the code is as follows:
BetaDeltaValuesOf10 = 0
Delays = 0:365
Code (equation) to get the subjective value/preference using the exponential discounting model:
ExponentialDecayValuesOf10 = .98^Delays*10
0.98 is the discount rate which ranges between 0 and 1.
Delays is the number of time periods in the future when the later reward will be delivered
10 is the subjective value of $10
Code (equation) to get the subjective value using the beta-delta model:
0.8*0.98^Delays*10
0.8 is the delay penalty
The code I came up with in trying to satisfy the above mentioned properties is as follows:
for(t in 1:length(Delays)){BetaDeltaValuesOf10 = 0.98^0*10
if(BetaDeltaValuesOf10 == 0){0.98^t*10}
else {0.8*0.98^t*10}
}
So, I tried the code and did not get any error. But, when I try to plot the outcome of the code, my plot comes up blank.
To plot I used the code:
plot(BetaDeltaValuesOf10,type = 'l', ylab = 'DiscountedValue')
I believe that my code is actually faulty and that is why I am not getting a proper outcome for my plot.
Please let me know of the amendments to the code and if the community needs any clarification, I will try to clarify as soon as I can.
result <- double(length=366)   # subjective value for each delay 0..365
delays <- 0:365
val <- 10                      # objective magnitude of the reward
delta <- 0.98                  # exponential discount rate
penalty <- 0.8                 # delay penalty, applied only to delayed rewards
for(t in seq_along(delays)) {
  # penalty^(delays[t] > 0) equals 1 when the delay is 0 and penalty otherwise,
  # so the if/else from the assignment collapses into a single expression
  result[t] <- val * delta^delays[t] * penalty^(delays[t] > 0)
}
plot(x=delays, y=result, pch=20)
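The same values can also be computed without an explicit loop. A minimal vectorized sketch in Python with numpy, simply mirroring the R answer above (not part of the assignment's required for-loop/if-else structure):
import numpy as np

delays = np.arange(366)                  # delays 0..365
val, delta, penalty = 10, 0.98, 0.8
# the delay penalty applies only when the reward is delayed (delay > 0)
values = val * delta**delays * np.where(delays > 0, penalty, 1.0)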
I am working in the Go environment.
I am looking for a cross-platform library I can use to generate my two formulas, in Python or F# or MATLAB, ...
I need to generate a mathematical formula based on two reference points.
The manufacturer indicates that the sensor value is coded on a byte and is between 0 and 255.
The minimum, 0, has the representation value -60 dB.
The maximum, 255, has the representation value +20 dB.
I must now generate two formulas:
RX: a mathematical formula that converts the raw value coming from the sensor into its representation in dB.
TX: the inverse of RX, i.e. a mathematical formula that converts a dB representation value back into the raw sensor value.
Any ideas are welcome.
Youssef
I am assuming you need a linear relationship, so you can use the following code:
INPUT_MIN = 0
INPUT_MAX = 255
OUTPUT_MIN = -60
OUTPUT_MAX = 20
SLOPE = (OUTPUT_MAX - OUTPUT_MIN) / (INPUT_MAX - INPUT_MIN)

def rx(sensor_input):
    return SLOPE * (sensor_input - INPUT_MIN) + OUTPUT_MIN

def tx(dbs):
    return (dbs - OUTPUT_MIN) / SLOPE + INPUT_MIN
What you have to do is to find the equation of the line given those two points. There are many tutorials online about it like this one.
Once you have found the equation in which y would be the variable that represents your output, and x represent your input, you need to find x in terms of y. Finally, you just implement both functions.
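For example, with these two points the slope is (20 - (-60)) / (255 - 0) = 80/255, or about 0.3137 dB per count, so RX(x) = (80/255) * x - 60, and solving for x gives the inverse TX(y) = (y + 60) * 255 / 80.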
Note that I haven't limited the input, so if you want to restrict the input values, I encourage you to add some conditionals to the functions.
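For instance, a sketch of rx with the input pinned to the valid byte range, reusing the constants defined above (adjust if out-of-range readings should instead raise an error):
def rx_clamped(sensor_input):
    # pin out-of-range readings to the 0..255 byte range before converting
    clamped = min(max(sensor_input, INPUT_MIN), INPUT_MAX)
    return SLOPE * (clamped - INPUT_MIN) + OUTPUT_MIN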
In Python, using numpy:
import numpy as np

def RX(input_val):
    # use linspace to create a lookup table: 256 evenly spaced dB values for raw codes 0..255
    lookup_array = np.linspace(-60, 20, 256)
    return lookup_array[int(input_val)]

def TX(decibel_value):
    # use linspace to create the same lookup table
    lookup_array = np.linspace(-60, 20, 256)
    # find the index whose dB value is closest to the requested decibel value
    index = (np.abs(lookup_array - decibel_value)).argmin()
    return index
I have a stream of data that trends over time. How do I determine the rate of change using C#?
It's been a long time since calculus class, but now is the first time I actually need it (in 15 years). Now when I search for the term 'derivatives' I get financial stuff, and other math things I don't think I really need.
Mind pointing me in the right direction?
If you want something more sophisticated that smooths the data, you should look into a digital filter algorithm. It's not hard to implement if you can cut through the engineering jargon. The classic method is Savitzky-Golay.
If you have the last n samples stored in an array y and each sample is equally spaced in time, then you can calculate the derivative using something like this:
deriv = 0
coefficient = (1, -8, 0, 8, -1)
N = 5  # points
h = 1  # second between samples
for i in range(0, N):
    deriv += y[i] * coefficient[i]
deriv /= (12 * h)
This example happens to be an N=5, "3/4" (cubic/quartic) filter. The bigger N is, the more points it averages over and the smoother the result will be, but the latency will also be higher: you'll have to wait N/2 points to get the derivative at time "now".
For more coefficients, look at the Appendix here:
https://en.wikipedia.org/wiki/Savitzky%E2%80%93Golay_filter
You need both the data value V and the corresponding time T, at least for the latest data point and the one before that. The rate of change can then be approximated with Euler's backward formula, which translates into
dvdt = (V_now - V_a_moment_ago) / (T_now - T_a_moment_ago);
in C#.
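For example, if the value went from 5.0 to 5.4 while the timestamp advanced by 0.2 seconds, the estimated rate of change is (5.4 - 5.0) / 0.2 = 2.0 units per second.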
Rate of change is calculated as follows:
Calculate a delta, such as "price now minus price 20 days ago"
Calculate the rate of change, such as "delta / price 99 days ago"
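For instance, if the delta is 10 and the reference price it is divided by is 100, the rate of change is 10 / 100 = 0.10, i.e. 10%.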
Total rate of change, i.e. (new_value - original_value)/time?
I am trying to determine the volatility of a rank.
More specifically, the rank can be from 1 to 16 over X data points (the number of data points varies with a maximum of 30).
I'd like to be able to measure this volatility and then map it to a percentage somehow.
I'm not a math geek so please don't spit out complex formulas at me :)
I just want to code this in the simplest manner possible.
I think the easiest first pass would be Standard Deviation over X data points.
I think that Standard Deviation is what you're looking for. There are some formulas to deal with, but it's not hard to calculate.
Given that you have a small sample set (you say a maximum of 30 data points) and that the standard deviation is easily affected by outliers, I would suggest using the interquartile range as a measure of volatility. It is a trivial calculation and would give a meaningful representation of the data spread over your small sample set.
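A minimal sketch of both suggestions, the standard deviation and the interquartile range, in Python with numpy (the ranks list is a made-up example of a rank history on the 1..16 scale):
import numpy as np

ranks = [3, 5, 2, 8, 7, 4]                 # hypothetical rank history
volatility_sd = np.std(ranks, ddof=1)      # sample standard deviation
q1, q3 = np.percentile(ranks, [25, 75])
volatility_iqr = q3 - q1                   # interquartile range: spread of the middle 50%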
If you want something really simple, you could take the average of the absolute differences between successive ranks as the volatility. This has the added bonus of being recursive. Use this for initialisation:
double sum = 0;
for (int i = 1; i < N; i++)
{
    sum += abs(ranks[i] - ranks[i-1]);
}
double volatility = sum / (N - 1);   // average of the N-1 successive absolute differences
Then, for updating the volatility when a new rank at time N+1 is available, you introduce the parameter K, where K determines the speed with which your volatility measurement adapts to changes in volatility. Higher K means slower adaptation, so K can be thought of as a "decay time" or somesuch:
double K = 14;   // higher = slower change in volatility over time
double newvolatility;
newvolatility = (oldvolatility * (K - 1) + abs(ranks[N+1] - ranks[N])) / K;
This is also known as a moving average (of the absolute differences in ranks in this case).