I'm new to Julia and trying to write a simple script to simulate population growth. At each time step the population grows as N(t+1) = N(t)(1 + beta), and I sample the new population from a Poisson distribution with mean N(t+1). I would like to stop when N either reaches a maximum value or drops to 0. I've implemented this in Julia, but the population often grows past the maximum value I define. Additionally, whenever N reaches 0 I get an error message: ErrorException("lambda must be positive").
using Distributions

function new_pop(N)
    beta = 0.1
    w_fit = 1
    rand(Poisson(N * (1 + w_fit * beta)))
end

pop_S = 10
pop_Max = 100
while (pop_S < pop_Max | pop_S > 0)
    pop_S = new_pop(pop_S)
    println(pop_S)
end
I think you might want || rather than |. A single bar does bitwise OR, whereas two bars is logical OR.
I'm stuck on this homework problem; I've never worked in this field before and I really need some help.
First, we have a Wiener process W(t), and we want the probability that the process drops beneath -3 within the time interval [0,1], i.e. P( min over t in [0,1] of W(t) < -3 ).
Now the thing is we have to simulate the process by discretizing it.
1. Suppose we first discretize the process into 100 points and simulate 10,000 paths in this way,
i.e., W(0.01), W(0.02), ..., W(1.00).
Note that W(t) - W(t-0.01) ~ N(0, 0.01) independently.
2. Using the data obtained in 1., we approximate the true probability by the fraction of simulated paths whose discretized minimum, min over i of W(0.01*i), falls below -3. What is the relationship between this value and the real probability (larger, equal to, or smaller)?
3. Repeat 1. and 2., but cut [0,1] into 10,000 points instead. Will the resulting probability increase or decrease?
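For step 1, a minimal numpy sketch of the discretized simulation might look like this (the variable names and the -3 threshold check are my own reading of the problem statement):

import numpy as np

n_paths, n_steps = 10_000, 100
dt = 1.0 / n_steps

# Independent increments W(t) - W(t - dt) ~ N(0, dt)
increments = np.random.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))

# Cumulative sums give W(0.01), W(0.02), ..., W(1.00) for each path
paths = np.cumsum(increments, axis=1)

# Fraction of paths whose discretized minimum drops below -3
prob_estimate = np.mean(paths.min(axis=1) < -3)
print(prob_estimate)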
I am using the finite difference scheme to find gradients.
Let's say I have 2 outputs (y1, y2) and 1 input (x) in a single component, and I know in advance that the sensitivity of y1 with respect to x is not the same as the sensitivity of y2 to x. Thus I could potentially use two different steps for them, as in:
self.declare_partials(of='y1', wrt='x', method='fd', step=0.01, form='central')
self.declare_partials(of='y2', wrt='x', method='fd', step=0.05, form='central')
There is nothing that stops me (algorithmically), but it is not clear what exactly the OpenMDAO gradient calculation would do in this case.
Does it share information between the two cases by looking at the ratio of the steps, or does it simply treat them independently and therefore double the computational time?
I just tested this, and it does the finite difference twice with the two different step sizes, and only saves the requested outputs for each step. I don't think we could do anything with the ratios as you suggested, since the reason for using different step sizes for individual outputs is that you don't trust the accuracy of the outputs at the smaller (or larger) step size.
This is a fair question about the effect of the API. In typical FD applications you would get only 1 function call per design variable for forward or backward difference, and 2 function calls for central difference.
However, in this case you have asked for two different step sizes for two different outputs, both with central difference. So here you'll end up with 4 function calls to compute all the derivatives: dy1_dx will be computed using a step size of 0.01 and dy2_dx with a step size of 0.05.
There is no crosstalk between the two different FD calls, and you do end up with more function calls than you would have if you just specified a single step size via:
self.declare_partials(of='*', wrt='x', method='fd', step=0.05, form='central')
If the cost is something you can bear, and you get improved accuracy, then you could use this method to get different step sizes for different outputs.
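For context, a minimal sketch of what such a component might look like; the component and its equations are made up for illustration, and only the declare_partials calls mirror the ones discussed above:

import openmdao.api as om

class TwoOutputComp(om.ExplicitComponent):
    def setup(self):
        self.add_input('x', val=1.0)
        self.add_output('y1', val=0.0)
        self.add_output('y2', val=0.0)
        # Different FD step sizes for the two outputs, as in the question
        self.declare_partials(of='y1', wrt='x', method='fd', step=0.01, form='central')
        self.declare_partials(of='y2', wrt='x', method='fd', step=0.05, form='central')

    def compute(self, inputs, outputs):
        # Hypothetical equations, just so the component runs
        outputs['y1'] = inputs['x'] ** 2
        outputs['y2'] = 3.0 * inputs['x']

prob = om.Problem()
prob.model.add_subsystem('comp', TwoOutputComp(), promotes=['*'])
prob.setup()
prob.run_model()
print(prob.compute_totals(of=['y1', 'y2'], wrt=['x']))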
I'm writing a javascript program that sends a list of MIDI signals over a specified period of time.
If the signals are sent evenly, it's easy to determine how long to wait in between each signal: it's just the total duration divided by the number of signals.
However, I want to be able to offer a setting where the signals aren't sent equally: either the signals are sent with increasing or decreasing speed. In either case, the number of signals and the total amount of time remain the same.
Here's a picture to visualize what I'm talking about
Is there a simple logarithmic/exponential function where I can compute what these values are? I'm especially hoping it might be possible to use the same equation for both, simply changing a variable.
Thank you so much!
Since you do not give any method to get a pulse value, from the previous value or any other way, I assume we are free to come up with our own.
In both of your cases, it looks like you start with an initial time interval: let's call it a. Then the next interval is that value multiplied by a constant ratio: let's call it r. In the first decreasing case, your value of r is between zero and one (it looks like around 0.6), while in the second case your value of r is greater than one (around 1.6). So your time intervals, in Python notation, are
a, a*r, a*r**2, a*r**3, ...
Then the time of each signal is the sum of a geometric series,
a * (1 - r**n) / (1 - r)
where n is the number of the pulse (1 for the first, 2 for the second, etc.). That formula is valid if r is not one; if r is one then the signals are simply evenly spaced and the nth signal is given at time
a * n
This is not a "fixed result" since you have two degrees of freedom--you can choose values of a and of r.
If you want to spread the signals more evenly, just bring r closer to one. A value of one is perfectly even, a value farther from one is more clumped at one end. One disadvantage of this method is that if the signal intervals are decreasing then the signals will completely stop at some point, namely at
a / (1 - r)
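Here is a short Python sketch of this (the same arithmetic carries straight over to JavaScript). Solving for a from a fixed total duration and signal count is my own framing, since the question keeps those two quantities constant:

def signal_times(total_duration, n_signals, r):
    if r == 1:
        a = total_duration / n_signals
        return [a * k for k in range(1, n_signals + 1)]
    # Solve a from the geometric sum: total_duration = a * (1 - r**n) / (1 - r)
    a = total_duration * (1 - r) / (1 - r ** n_signals)
    # Time of the k-th signal is the partial sum a * (1 - r**k) / (1 - r)
    return [a * (1 - r ** k) / (1 - r) for k in range(1, n_signals + 1)]

print(signal_times(10.0, 5, 1.0))   # evenly spaced
print(signal_times(10.0, 5, 0.6))   # intervals shrink: signals speed up
print(signal_times(10.0, 5, 1.6))   # intervals grow: signals slow down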
If you already have signals sent or received and you want to find the value of r, take three consecutive signals: r is the time interval between the 2nd and 3rd signal divided by the time interval between the 1st and 2nd signal. If you want to see whether this model fits a given set of signals, check the value of r at multiple places--if the value of r is nearly constant then this is a good model.
Here's some pseudocode:
count = 0
for every item in a list
1/20 chance to add one to count
This is more or less my current code, but there could be hundreds of thousands of items in that list; therefore, it gets inefficient fast. (Isn't this called O(n) or something?)
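For reference, the loop written out as plain Python (assuming each item independently has a 1/20 chance of incrementing the count; items stands in for the list from the question):

import random

count = 0
for _ in items:                    # items: the list, however long it is
    if random.random() < 1 / 20:   # 1-in-20 chance per item
        count += 1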
Is there a way to compress this into one equation?
Let's look at the properties of the random variable you've described. Quoting Wikipedia:
The binomial distribution with parameters n and p is the discrete probability distribution of the number of successes in a sequence of n independent yes/no experiments, each of which yields success with probability p.
Let N be the number of items in the list, and C be a random variable that represents the count you're obtaining from your pseudocode. C follows a binomial probability distribution with n = N trials and p = 1/20.
The remaining problem is how to efficiently poll a random variable with said probability distribution. There are a number of libraries that allow you to draw samples from random variables with a specified distribution. I've never had to implement it myself, so I don't know the details exactly, but many are open source and you can refer to the implementation yourself.
Here's how you would calculate count with the numpy library in Python:
import numpy as np

n, p = 10, 0.05                    # 10 trials, probability of success is 0.05
count = np.random.binomial(n, p)   # draw a single sample
Apparently the OP was asking for a more efficient way to generate random numbers with the same distribution this gives. I thought the question was how to do the exact same operation as the loop, but as a one-liner (and preferably with no temporary list that exists just to be iterated over).
If you sample a random number generator n times, it's going to have at best O(n) run time, regardless of how the code looks.
In some interpreted languages, using more compact syntax might make a noticeable difference in the constant factors of run time. Other things can affect the run time, like whether you store all the random values and then process them, or process them on the fly with no temporary storage.
None of this will allow you to avoid having your run time scale up linearly with n.
I need to write a function that returns one of the numbers (-2, -1, 0, 1, 2) randomly, but I need the average of the output to be a specific number (say, 1.2).
I saw similar questions, but all the answers seem to rely on the target range being wide enough.
Is there a way to do this (without saving state) with this small selection of possible outputs?
UPDATE: I want to use this function for (randomized) testing, as a stub for an expensive function which I don't want to run. The consumer of this function runs it a couple of hundred times and takes an average. I've been using a simple randint function, but the average is always very close to 0, which is not realistic.
Point is, I just need something simple that won't always average to 0. I don't really care what the actual average is. I may have asked the question wrong.
Do you really mean to require that specific value to be the average, or rather the expected value? In other words, if the generated sequence were to contain an extraordinary number of small values in its initial part, should the rest of the sequence compensate for that to get the overall average right? I assume not; I assume you want all your samples to be computed independently (after all, you said you don't want any state), in which case you can only control the expected value.
If you assign a probability p(i) to each of your possible choices i, then the expected value will be the sum of those values, weighted by their probabilities:
EV = -2·p(-2) - 1·p(-1) + 1·p(1) + 2·p(2) = 1.2
As additional constraints you have to require that each of these probabilities is non-negative, and that the above four add up to at most 1, with the remainder taken by the fifth probability p(0).
There are many possible assignments which satisfy these requirements, and any one of them will do what you asked for. Which of them are reasonable for your application depends on what that application does.
You can use a PRNG which generates variables uniformly distributed in the range [0,1), and then map these to the cases you described by taking the cumulative sums of the probabilities as cut points.
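A minimal Python sketch of that mapping, using one probability assignment that satisfies the constraints above (the particular numbers are just an example; any non-negative assignment with the right weighted sum works):

import random

# One of many valid assignments: the weighted sum is
# -2*0.05 - 1*0.05 + 0*0.10 + 1*0.25 + 2*0.55 = 1.2
values = [-2, -1, 0, 1, 2]
probs  = [0.05, 0.05, 0.10, 0.25, 0.55]

def sample():
    u = random.random()          # uniform in [0, 1)
    cumulative = 0.0
    for value, p in zip(values, probs):
        cumulative += p          # cumulative sums act as cut points
        if u < cumulative:
            return value
    return values[-1]            # guard against floating-point round-off

# Quick check: the average over many draws should be close to 1.2
print(sum(sample() for _ in range(100_000)) / 100_000)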