Time components of propagation - networking

I had a homework assignment for school that dealt with calculating the propagation time of information sent from various places around the world.
The next question asked me to find the difference between the download time from the various places and the estimated propagation time, both in milliseconds. The next part is what confused me: it asked "what time components must this include?"
I don't even know what time components are and have never heard that term in my 4 years of dealing with computers and networks.
Thanks for your help in advance! (The question is posted below.)
Compute the difference between the measured download time and estimated propagation time and put the result in the "difference" column. What time components must this value include?

I understand the question to mean "how do you explain the difference between the measured download time and the estimated propagation time?"
So, for example, if you decided that the difference was due to packets having to be re-sent over a poor connection, then the "time components" would be the delays introduced by re-sending those packets.
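As a back-of-the-envelope sketch of where such a difference can come from: besides retransmissions, the gap between measured download time and pure propagation time also holds transmission, queuing, and processing delays. All numbers below are invented for illustration, not taken from the assignment.

```python
# Sketch: measured download time = propagation delay + "everything else"
# (transmission, queuing, processing, and any retransmission delays).
# All constants below are made-up illustrations.

distance_m = 6_000_000       # ~6000 km path
signal_speed = 2e8           # m/s in fiber, roughly 2/3 the speed of light
file_bits = 8 * 1_000_000    # a 1 MB file
bandwidth_bps = 10e6         # a 10 Mbit/s bottleneck link

propagation_ms = distance_m / signal_speed * 1000    # 30.0 ms
transmission_ms = file_bits / bandwidth_bps * 1000   # 800.0 ms

measured_download_ms = 900.0                         # hypothetical measurement
difference_ms = measured_download_ms - propagation_ms

print(f"difference = {difference_ms:.1f} ms")
# That 870 ms is transmission + queuing + processing (+ retransmissions).
```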

Related

Averaging when events are slower than measurement time

I'm trying to come up with a better way of providing an instantaneous average when input signal is very slow. This seems like a math-y kinda question so if it should be over there let me know.
I have events that are measured as a pulse. Normally I can collect the pulses in a counter and then read the counter value at a fixed interval, say 1/4 second. I can then take the count value and divide by the number of seconds so n/0.25 and get a rate. I then apply a low pass filter to clean up the average and that works great normally.
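For concreteness, the normal fast-rate scheme described here might look like the following sketch (the interval and filter constant are illustrative choices):

```python
INTERVAL_S = 0.25   # fixed read-out interval
ALPHA = 0.1         # smoothing factor of a first-order low-pass filter

filtered_rate = 0.0

def update(count: int) -> float:
    """Called every INTERVAL_S seconds with the pulse counter value."""
    global filtered_rate
    raw_rate = count / INTERVAL_S                     # events per second
    filtered_rate += ALPHA * (raw_rate - filtered_rate)
    return filtered_rate
```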
What do I do when the events happen once every 1-60 seconds? The obvious choice is to wait until I have a sufficient number of counts and divide by total time. However, I need to provide the user with a reading every few seconds so waiting is not an option. I need some way to estimate the value.
I've thought of one solution that's kinda hard to explain. I was wondering if there was a standard way of doing this. I'm pretty sure I have to utilize a different kind of "data," the lack of an event. The goal is to estimate until enough time/events have passed to really calculate a rate and to transition from estimate to real rate seamlessly.
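One plausible reading of the "lack of an event is data" idea, as a sketch (the blending rule here is an assumption, not a standard named algorithm):

```python
class SlowRateEstimator:
    """Estimate a rate from sparse events, treating silence as evidence."""

    def __init__(self):
        self.last_event_time = None
        self.rate = 0.0                      # events per second

    def on_event(self, now: float) -> None:
        if self.last_event_time is not None:
            # Rate implied by the most recent inter-event interval.
            self.rate = 1.0 / (now - self.last_event_time)
        self.last_event_time = now

    def read(self, now: float) -> float:
        """Called every few seconds to refresh the user's display."""
        if self.last_event_time is None:
            return 0.0
        silence = now - self.last_event_time
        if silence > 0 and 1.0 / silence < self.rate:
            # We've gone longer without an event than the current rate
            # implies; the silence itself caps the plausible rate.
            return 1.0 / silence
        return self.rate
```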

What is the theory behind active noise cancellation?

In a previous question, I had asked "Why can't I simply negate the source time-domain amplitude values to produce a destructive noise signal?"
One of the posters said that while simply producing an inverse-polarity (negated) signal will work in theory, in practice it is not possible.
So I am asking, what is the fundamental approach (in a sort of semi technical way) to active noise cancellation?
Secondly, why is most of the literature on this topic in the frequency domain?
It's rather simple.
By the time you send your inverted signal, the noise has already been heard.
You need to look at what frequencies are being generated, and then produce the appropriate inverted signals of those to cancel them out.
Noise cancellation is prediction. Your algorithm has to predict what the sound of the noise will be at some time in the future (that time given by the system and audio time latencies), and then predict what signal will produce the opposite sound at that same point in the future (which your system will distort and delay, so you have to figure in the opposite distortion and delay).
You might be able to use several successive FFTs to determine which frequencies in the noise are not changing, and assume or calculate some probability that they will continue for a short time into the future.
If you know the frequency response curve of the speaker, you might be able to figure out the frequency amplitudes of a signal needed to match some predicted noise spectrum. The phase angle of a sinusoid will change with time. If you know the time delay of your output signal, you might be able to calculate the phase of a sinusoid at some point in the future. If you have a predicted phase of a particular frequency of noise at some time and location, you can add π to that phase angle to estimate the noise-cancelling signal.
If you don't know the frequency response and delay of your system, then you won't know what frequencies, amplitudes or phases of signal to create for cancellation. You might well end up amplifying the noise instead of cancelling it.
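As a toy illustration of this FFT-and-phase idea (single dominant tone, ideal speaker, known output latency; every constant here is invented):

```python
import numpy as np

FS = 8000            # sample rate in Hz
LATENCY_S = 0.005    # assumed known system output latency

def antinoise_block(noise: np.ndarray) -> np.ndarray:
    """Find the strongest tone in a block of noise and synthesize a
    sinusoid of equal amplitude and opposite phase, advanced by the
    known latency. Real ANC must track many components, clock drift,
    and the speaker's own frequency response."""
    n = len(noise)
    window = np.hanning(n)
    spectrum = np.fft.rfft(noise * window)
    k = int(np.argmax(np.abs(spectrum)))          # dominant frequency bin
    freq = k * FS / n
    amp = 2 * np.abs(spectrum[k]) / window.sum()  # undo the window's gain
    phase = np.angle(spectrum[k])
    t = np.arange(n) / FS + LATENCY_S             # predict into the future
    return amp * np.cos(2 * np.pi * freq * t + phase + np.pi)  # add pi
```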
It seems that what's missing is the propagation delay required to intercept and negate a signal. The KISS rule will eventually prove this true. The FFT is a complex calculation, and each N-point iteration introduces error because of the time required to process the signal. To cancel a sound wave, it must be intercepted in advance, processed and inverted. Then the time constant of the transducer must be considered. In my experience, what works is a microphone near the source of the "noise", connected by wire to an amplification device and a transducer near the location where the sound is to be cancelled.
The basic idea of ANC is to find repetitive sound and play the opposite of it. If the repetitive sound continues to play, we'll be able to cancel it. That goes in direct contradiction to the other answers, but I'll clarify.
Playing the opposite sound means playing it again with a precise power and delay, possibly inverting the waveform. The delay itself varies for each frequency. For example, for a 20 Hz sound we have to replay the inverted sound at a precise multiple of 1/20 = 0.05 s. For a 23 Hz sound, the delay has to be a multiple of 1/23 ≈ 0.0435 s.
Since any waveform can be produced by a sum of sinusoids, one way of doing it would be to only worry about the N biggest sinusoids, measured in power (the square of the amplitude). To find the sinusoids' frequencies and powers we use the Fourier transform, typically via the FFT algorithm.
If we take, for example, N = 8, it means we are trying to eliminate the 8 most powerful wave components. For each of them we store:
the wave's amplitude;
the wave's offset, taking the computer's clock as a base.
Then we constantly play 8 sinusoids, each at the correct power and with the correct delay. The hard part is what happens next. We need to keep listening in order to adapt, but now we are listening to the environment's sound plus our own sound. This algorithm is harder to implement, but it is conceptually easier, and one could figure out how to do it on their own.
So, contrary to what the other answers say, managing the time delay is critical. It is not possible to create an ANC system without doing it. If you only cared about the frequency domain, the only thing you could possibly do is filter those frequencies, which makes no sense for an ANC system.
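A minimal sketch of the bookkeeping described in this answer (N, the block size, and the clock handling are illustrative):

```python
import numpy as np

FS = 8000          # sample rate in Hz
N = 8              # cancel the 8 most powerful components

def strongest_components(block: np.ndarray):
    """Return (frequency, amplitude, phase offset) for the N most
    powerful sinusoids in a block, found with the FFT."""
    n = len(block)
    spectrum = np.fft.rfft(block)
    power = np.abs(spectrum) ** 2                # power = amplitude squared
    top = np.argsort(power)[-N:]                 # N biggest bins
    return [(k * FS / n,                         # frequency in Hz
             2 * np.abs(spectrum[k]) / n,        # amplitude
             np.angle(spectrum[k]))              # offset vs. block start
            for k in top]

def antinoise(components, t: np.ndarray) -> np.ndarray:
    """Replay every stored sinusoid at the correct power, shifted by
    pi so it arrives as the opposite of the noise."""
    return sum(a * np.cos(2 * np.pi * f * t + p + np.pi)
               for f, a, p in components)
```

Here t must be expressed against the same clock the block was captured with; keeping that alignment while your own output leaks back into the microphone is exactly the hard delay-management part.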

What is a cyclic redundancy check and how does it work, in simple terms (for-dummies style)?

I'm having trouble understanding the concept and workings of the ugly sounding term "cyclic redundancy check". I'm attending a college course on Computer Networks and I'm getting lost already.
The trouble is that my understanding of mathematics is very limited (studied maths a long time ago in school and forgot most of it) and I can't get for example what the hell a generator polynomial is, what polynomials have to do with CRC and to sum it up - all of that seems totally incomprehensible to me.
I read the wiki entry on CRC but it didn't help me since I'm no good at maths and all these symbols and math terms are like Chinese to me.
I understand that CRC is used for error detection when sending data on the network but from then on I'm lost.
Can anybody help me with explaining this concept in simple terms and possibly give an example?
During the last lecture the professor started drawing all these one's and zeroes, dividing and I don't know what and I was just staring and feeling stupid.
I'd be very grateful if anybody can help me understand!
If you want the answer to be very simple, you need to accept some oversimplification. If you're willing to live with that, here it goes:
Data is transmitted over imperfect links - errors may occur on the way. Imagine you want to make sure the received information is the same as the transmitted one without wasting too much bandwidth, how would you do that?
You could transmit every piece of information twice, and if on the receiving end you see that the first copy is different from the second, you know an error has occurred and you need to request the data again - but this would be very wasteful; it would effectively cut your bandwidth in half.
Now, what if you could calculate some value that is much smaller than the data itself yet is dependent on it? So if the data changed along the way (due to error), the calculated value would no longer "match" the data and you would know an error has occurred. Is there such a calculation?
What about simple division and taking a remainder as this value?
Say I want to transmit the number 1,000. I divide it by a chosen number - like 6, for instance - which gives me 166 with a remainder of 4. I take the remainder as my check value; it is much smaller than the information I'm actually transmitting, so I'm not wasting too much bandwidth, and I transmit 1,000 followed by 4. The receiver takes the number 1,000, divides it by 6, and if the remainder is 4 it assumes that no error has occurred.
If an error had occurred and it received 998 instead of 1,000 due to an error on the link, it would divide by 6, get a remainder of 2, which does not match 4, and voilà, it knows an error has occurred. That is the basic principle of CRC.
Of course it is a little more complicated because it divides by a polynomial but the principle of using a remainder as a "short value representing the data" to check it for errors in the same way stands.
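To make the toy example runnable (the real CRC at the end uses binary polynomial division instead of a decimal divisor):

```python
import zlib

DIVISOR = 6   # the toy divisor from the example above

def check_value(number: int) -> int:
    return number % DIVISOR

print(check_value(1000))   # 4 -> transmit "1000, 4"
print(check_value(998))    # 2 != 4 -> receiver knows an error occurred

# A real CRC applies the same remainder idea to binary polynomial
# division; zlib ships a ready-made 32-bit version.
print(hex(zlib.crc32(b"hello world")))
```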
I hope this helps you get your head around what's going on ;)

Dynamics of burn down charts in Scrum [closed]

I have a question about how dynamic the Y axis of a burn down chart in Scrum should be. We plot the chart in the beginning of a sprint having the total number of estimated story points on the Y axis, and the planned days on the X axis.
Usually, during the sprint, we have a fair amount of:
unplanned tasks / stories;
tasks / stories that take longer than estimated (re-estimated by the person checking out the task);
Questions:
should the story points of the unplanned tasks be plotted into the chart? If so, should we extend the Y axis as well and redraw the expected curve, or just plot the points and have an actual curve whose points may be higher than the starting point?
should the re-estimations be counted when plotting the chart, or just the initial estimates? Same questions as for the first point...
I would prefer to ignore the unplanned items and the re-estimations as they will show up in the actual focus factor calculation anyway. Is it wrong?
Try using a burn UP chart.
http://www.nearinfinity.com/blogs/lee_richardson/forget_burndown_use_burnup_charts.html
Also, I would do everything in your power to stop the unplanned items. They are typically very caustic. If it's code debt cashing in, try to address it a little bit at a time in every sprint. If it's a consistent amount of time every sprint, perhaps create a story at the start of the sprint for "unplanned tasks" or "production fixes" or something like that.
In the end, what really matters is that the burndown chart allows you to track progress (or lack thereof) toward the commitment. So as long as you're achieving that, you're good to go. Which means, really any of these solutions would work - just pick one and go with it.
We usually do option number 2 at work, adding the new story points to the actual line so that we "see" that the line goes up, reflecting new learnings and additions. But since opinions vary, I guess your team will have to agree on what suits them best, since these burndown charts are for the team to show progress throughout the sprint.
What you do or do not count should depend on what you are using your burndown for.
When I use a burndown it is most often to answer the question "Are we on track to completing our commitment of this sprint - or do we need to take external action?".
In that case, the thing that is most relevant to track is the anticipated total amount of work left to finish the commitment; whether that work was planned or unplanned, or whether it was originally estimated at a different amount, is uninteresting in this context. It is still work that needs to be done - so it all counts.
So, count all remaining work. If the graph points towards the goal, keep working. If it points somewhere drastically different, take external action (e.g. renegotiate the sprint commitment with the PO).
Now, you might be trying to answer another question (e.g. "how good are we at planning?" or "are we having scope creep during the sprint?"), and in that case you would count in a different way.
A burndown chart is useful for tracking progress towards the team's commitment. In this case, it sounds like your team is struggling with two things that don't relate to the burndown chart:
1. Unplanned work
2. Poor estimates.
The key here is to focus on those problems. No matter what you do with the burndown chart, if you're adding unplanned work and your estimates are poor... you'll never derive any value from the burndown chart.
I'd recommend a couple of things:
1. Switch to tracking hours for Tasks... not points. Hours are tangible for the team... they mean something. Points are typically burned down at the release level.
2. Try shortening the length of your sprints. It's easier to achieve a smaller goal.
3. Ensure that task estimates are no longer than 8 hours. In fact, I'd shorten that to probably 4 hours. Estimating tasks that take longer than a single day encourages the wrong behavior for the team.
4. Ensure that you're spending enough time in Sprint Planning that the team can make a commitment. An effective sprint planning meeting is the first step towards an effective sprint.

Measuring time difference between networked devices

I'm adding networked multiplayer to a game I've made. When the server sends an update packet to the client, I include a timestamp so that the client knows exactly when that information is valid. However, the server computer and the client computer might have their clocks set to different times (maybe even just a few seconds difference), so the timestamp from the server needs to be translated to the client's local time.
So, I'd like to know the best way to calculate the time difference between the server and the client. Currently, the client pings the server for a time stamp during initialization, takes note of when the request was sent and when it was answered, and guesses that the time stamp was generated roughly halfway along the journey. The client also runs 10 of these trials and takes the average.
But, the problem is that I'm getting different results over repeated runs of the program. Within each set of 10, each measurement rarely diverges by more than 400 milliseconds, which might be acceptable. But if I wait a few minutes between each run of the program, the resulting averages might disagree by as much as 2 seconds, which is not acceptable.
Is there a better way to figure out the difference between the clocks of two networked devices? Or is there at least a way to tweak my algorithm to yield more accurate results?
Details that may or may not be relevant: The devices are iPod Touches communicating over Bluetooth. I'm measuring pings to be anywhere from 50-200 milliseconds. I can't ask the users to sync up their clocks. :)
Update: With the help of the below answers, I wrote an objective-c class to handle this. I posted it on my blog: http://scooops.blogspot.com/2010/09/timesync-was-time-sink.html
I recently took a one-hour class on this and it wasn't long enough, but I'll try to boil it down to get you pointed in the right direction. Get ready for a little algebra.
Let s equal the time according to the server. Let c equal the time according to the client. Let d = s - c. d is what is added to the client's time to correct it to the server's time, and is what we need to solve for.
First, the server sends a packet to the client with a timestamp. When that packet is received, the client stores the difference between its own clock and the given timestamp as t1.
The client then sends a packet to the server with its own timestamp. The server sends the difference between its own clock and that timestamp back to the client as t2.
Note that t1 and t2 both include the "travel time" t of the packet plus the time difference between the two clocks d. Assuming for the moment that the travel time is the same in both directions, we now have two equations in two unknowns, which can be solved:
t1 = t - d
t2 = t + d
t1 + d = t2 - d
d = (t2 - t1)/2
The trick comes because the travel time is not always constant, as evidenced by your pings between 50 and 200 ms. It turns out to be most accurate to use the timestamps with the minimum ping time. That's because your ping time is the sum of the "bare metal" delay plus any delays spent waiting in router queues. Every once in a while, a lucky packet gets through without any queuing delays, so you use that minimum time as the most repeatable time.
Also keep in mind that clocks run at different rates. For example, I can set my computer's clock at home to the millisecond, and a day later it will be 8 seconds slow. That means you have to continually readjust d. You can use the slope of various values of d computed over time to calculate your drift and compensate for it between measurements, but that's beyond the scope of an answer here.
Hope that helps point you in the right direction.
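A sketch of that recipe (the transport hook is a placeholder I've invented; time.time() stands in for each machine's wall clock):

```python
import time

def sample_offset(request_server_time):
    """One round trip. request_server_time() is a placeholder that asks
    the server for its current timestamp and returns it."""
    t_send = time.time()
    server_time = request_server_time()
    t_recv = time.time()
    rtt = t_recv - t_send
    # Assume the server stamped the reply halfway through the round trip.
    offset = server_time - (t_send + rtt / 2)    # this approximates d
    return offset, rtt

def best_offset(samples):
    """Prefer the sample with the smallest round-trip time: the packet
    least likely to have waited in a queue, hence the most repeatable."""
    return min(samples, key=lambda s: s[1])[0]
```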
Your algorithm will not be much more accurate unless you can use some statistical methods. First of all, 10 is probably not sufficient. The first and simplest change would be to gather 100 transit time samples and toss out the x longest and shortest.
Another thing to add would be for both ends to send their own timestamp in each packet. Then you can also calculate how far apart their clocks are and check the average difference between them.
You can also look at SNTP and NTP implementations, as these protocols are designed to solve exactly this problem.
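A minimal sketch of the trimming idea from above (the trim count is an arbitrary choice):

```python
def trimmed_mean(samples, trim=10):
    """Average the transit-time samples after dropping the `trim`
    largest and smallest; assumes len(samples) > 2 * trim."""
    kept = sorted(samples)[trim:-trim]
    return sum(kept) / len(kept)

# e.g. gather 100 samples, then toss the 10 longest and 10 shortest:
# offset = trimmed_mean(offset_samples)
```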
