I'm currently trying to calculate the optimal window size. I have these variables:
Propagation delay: 10 ms
Bit speed: 100 kbps
Frame size: 20 bytes
When the propagation delay is 10 ms we hit a limit at a window size of 13, and when the propagation delay is 20 ms we hit a limit at a window size of 24.
Is there any formula to calculate the maximum window size?
The formula for your question is:
windowsize = (bitspeed * 2 * tp) / (framesize * 8)
Where:
bitspeed = 100 kbps = 100,000 bits/s in your case
2 * tp = RTT (the time it takes for a frame to go out and the acknowledgement to come back), which in your case is 2 * 10 ms = 20 ms
framesize = 20 bytes, multiplied by 8 to get the size in bits (160 bits)
windowsize = the value you want to calculate
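As a quick sanity check, here is the same arithmetic in R (a minimal sketch using the numbers from your question):
# Window size needed to keep the link busy, using the question's numbers
bit_speed  <- 100e3       # 100 kbps, in bits per second
tp         <- 10e-3       # one-way propagation delay, in seconds
frame_bits <- 20 * 8      # 20-byte frames, in bits
window <- (bit_speed * 2 * tp) / frame_bits
window                    # 12.5
ceiling(window)           # 13, matching the limit you observed for tp = 10 ms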
Hope I was helpful!
Bandwidth times delay. It's called the bandwidth-delay product.
I'm working on a scenario where I have to generate some numbers at a rate of 10, use cumsum to sequence them, and then remove anything with a value over 12 (this represents the timings of visitors to a website):
Visits = rexp(4000, rate = 10)
Sequenced = cumsum(Visits)
Sequenced <- Sequenced[Sequenced <= 12]
From here I need to verify that the generated "visits" follows a Poisson process with a rate of 10, but I'm not sure I'm doing this right.
TheMean = mean(Sequenced)
HourlyRate1 = TheMean/12 # divided by 12 as data contains up to 12 hours
This does not generate an answer of (or near) 10 (I thought it would based on the rate parameter of the rexp function).
I am new to this, so I believe I have misunderstood something along the way, but I'm not sure what. Can somebody please point me in the right direction? Using the data generated in the first code segment above, I need to "verify the visits follow a Poisson process with rate λ = 10".
You are measuring the wrong thing.
Since Sequenced (the times of visits) cannot exceed 12, its mean is likely to be about 6; if that is the case, it simply confirms that you applied the limit of 12.
What does have a Poisson distribution is the number of terms in Sequenced: it is expected to be 12 × 10 = 120, with a variance of 120 and so a standard deviation of about 10.95. You could look at that directly, or divide it by 12 (in which case the expected value is 10 and the standard deviation about 0.9, though the result is not Poisson distributed and can take non-integer values), with the R code
NumberOfVisits <- length(Sequenced)       # number of arrivals in the first 12 hours
VisitsPerUnitTime <- NumberOfVisits / 12  # empirical hourly rate, should be close to 10
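If you want to check the rate empirically, one option (a sketch, not part of the original answer) is to repeat the simulation many times and compare the observed counts with a Poisson(120) distribution:
# Repeat the experiment to verify the count of visits behaves like Poisson(120)
set.seed(1)
counts <- replicate(1000, {
  visits <- cumsum(rexp(4000, rate = 10))   # same construction as in the question
  sum(visits <= 12)                         # number of visits in the first 12 hours
})
mean(counts)       # close to 120
var(counts)        # also close to 120, as expected for a Poisson count
mean(counts) / 12  # estimated hourly rate, close to 10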
When I use AWS DynamoDB, I don't understand how the service calculates the cost with auto scaling within each hour.
Example:
If for the first 12 minutes of an hour the provisioned write capacity is 10 WCU, and for the remaining 48 minutes it is 20 WCU,
what actual write capacity should I pay for? (10, 20, or 10 * 0.2 + 0.8 * 20?)
There are specific examples of this at https://aws.amazon.com/dynamodb/pricing; in short, you will pay for the maximum capacity units provisioned in a given hour when scaling up, and for the target capacity units when scaling down.
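For the example above, a quick back-of-the-envelope comparison in R (a sketch only; the authoritative examples are on the pricing page linked above):
# Hypothetical hour from the question: 12 minutes at 10 WCU, then 48 minutes at 20 WCU
time_weighted <- 10 * (12 / 60) + 20 * (48 / 60)  # = 18 WCU, the blended average
billed_max    <- max(10, 20)                      # = 20 WCU, what you pay when scaling up
time_weighted
billed_max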
I have a high-performance hardware true random number generator. When I run its output through ENT I get the following report:
Entropy = 8.000000 bits per byte.
Optimum compression would reduce the size of this 1073741824 byte file
by 0 percent.
Chi square distribution for 1073741824 samples is 247.87, and randomly
would exceed this value 61.38 percent of the times.
Arithmetic mean value of data bytes is 127.4957 (127.5 = random).
Monte Carlo value for Pi is 3.141666379 (error 0.00 percent).
Serial correlation coefficient is 0.000056 (totally uncorrelated = 0.0).
In R I have been taking the random bytes and using them in sample as a probability vector.
This takes the form of sample(x, size, replace = FALSE, prob = myprobvector), which works fine until I test the entropy of the output and find it is not much better than the Mersenne-Twister, which gives 99.9% entropy efficiency. In my case I am using a base of 75 choices, so log2(75) = 6.22881869. The entropy of the MT approach is 6.223578221 and with the probability vector it is 6.223578468. I believe it is still using MT and just using the probability vector to add weight and move things around. How can I get it to use just the values off the hardware? (Assume they are being read like a file.)
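One way to approach this (a sketch under assumptions not in the question: a hypothetical file hwrng.bin holding the raw hardware bytes, and the 75 choices weighted equally for simplicity) is to bypass R's internal RNG and map hardware bytes to indices with rejection sampling:
# Sketch: draw indices 1..75 directly from hardware bytes (hwrng.bin is hypothetical)
con <- file("hwrng.bin", "rb")
raw_bytes <- as.integer(readBin(con, what = "raw", n = 1e6))
close(con)
# Rejection sampling: keep only bytes in 0..224 (3 * 75 values), so each of the
# 75 outcomes is equally likely; the remaining bytes 225..255 are discarded.
accepted <- raw_bytes[raw_bytes < 225]
indices  <- accepted %% 75 + 1
draws <- x[indices]   # x is the vector of 75 choices from your sample() call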
I found the calculation below at http://www.gridsouth.com/services/colocation/basics/bandwidth
I have no idea how they came up with the final number of 1.395 Mbps. Can you please help me with the formula used in the example below?
If your network provider bills you on average usage, let's say they sample your Mbps usage 100 times in one month (typically it would be more like every 5 minutes), and of those samples your network usage was measured as follows: 20 times 0.1 Mbps, 30 times 1.5 Mbps, 30 times 1.8 Mbps, 15 times 1.9 Mbps and 5 times 2 Mbps. If you average all these samples you would be billed at whatever the price is in your contract for 1.395 Mbps of bandwidth.
(20*0.1 + 30*1.5 + 30*1.8 + 15*1.9 + 5*2) / 100 = 139.5 / 100 = 1.395
Looks like the standard definition for a (weighted) average...
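In R, for instance, that is just (reproducing the numbers from the quoted example):
# Weighted average of the sampled bandwidth readings
n_samples <- c(20, 30, 30, 15, 5)         # how many times each level was observed
mbps      <- c(0.1, 1.5, 1.8, 1.9, 2.0)   # the measured bandwidth levels
sum(n_samples * mbps) / sum(n_samples)    # 1.395 Mbps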
I am trying to generate a series of wait times for a Markov chain where the wait times are exponentially distributed numbers with rate equal to one. However, I don't know the number of transitions of the process, rather the total time spent in the process.
So, for example:
t <- rexp(100,1)
tt <- cumsum(c(0,t))
Here t is a vector of the successive, independent waiting times, and tt is the vector of actual transition times starting from 0.
Again, the problem is that I don't know the length of t (i.e. the number of transitions); I only know how much total waiting time will elapse (i.e. the floor of the last entry in tt).
What is an efficient way to generate this in R?
The Wikipedia entry for Poisson process has everything you need. The number of arrivals in the interval has a Poisson distribution, and once you know how many arrivals there are, the arrival times are uniformly distributed within the interval. Say, for instance, your interval is of length 15.
N <- rpois(1, lambda = 15)             # number of arrivals in an interval of length 15
arrives <- sort(runif(N, max = 15))    # given N, arrival times are uniform on [0, 15]
waits <- c(arrives[1], diff(arrives))  # waiting times between successive arrivals
Here, arrives corresponds to your tt and waits corresponds to your t (by the way, it's not a good idea to name a vector t, since t is reserved for the transpose function). Of course, the last entry of waits has been truncated, but you mentioned only knowing the floor of the last entry of tt anyway. If it's really needed, you could replace it with an independent exponential draw (one bigger than waits[N]), if you like.
If I got this right: you want to know how many transitions it will take to fill your time interval. Since the transitions are random and unknown, there's no way to predict the number in advance for a given sample. Here's how to find the answer:
tfoo <- rexp(100, 1)            # draw a batch of waiting times
max(which(cumsum(tfoo) <= 10))  # how many transitions fit into a total time of 10
[1] 10
tfoo <- rexp(100, 1)            # do another trial
max(which(cumsum(tfoo) <= 10))
[1] 14
Now, if you expect to need to draw some huge sample, e.g. rexp(1e10, 1), then maybe you should draw in 'chunks': draw 1e9 samples and see if sum(tfoo) exceeds your time threshold. If so, dig through the cumsum; if not, draw another 1e9 samples, and so on. A sketch of that idea is below.
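A minimal sketch of that chunked approach (draw_until, chunk_size and total_time are illustrative names, not from the original answers):
# Draw exponential waiting times in chunks until the total time is covered
draw_until <- function(total_time, rate = 1, chunk_size = 1e5) {
  waits <- numeric(0)
  while (sum(waits) <= total_time) {
    waits <- c(waits, rexp(chunk_size, rate))  # append another chunk of draws
  }
  tt <- cumsum(waits)
  waits[tt <= total_time]                      # keep only the waits that fit
}
w <- draw_until(10)
length(w)  # number of transitions that fit into a total time of 10
sum(w)     # total elapsed time, at most 10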