Meaning of timekeeper and oriented - geography

What is the meaning of "oriented" and "timekeeper" in the sentence below?
From Google: a timekeeper is one who records the time in sports, and
oriented means positioned or facing towards something,
but I could not understand them in the context below. Can you explain it in simple terms?
Our global time system is oriented to the Sun. The relationship between sun and earth is considered as the best time keeper in the world.
The source of the information is below.

What they are saying is that we base our time as a society (when to go to work, eat, sleep) on the cycle of the sun: sunrise, noon, sunset.

Louis Essen built the first caesium clock in 1955. That's an atomic clock so accurate that its drift is negligible for everyday timekeeping.

Related

Combining multiple Routing tasks of Google OR-Tools

I want to solve a VRP with some constraints. Let's say I have a network of nodes (an electric bus network) where each node represents a bus line. Each node has attributes like start time, end time, and energy consumption.
My idea is to use the code for a VRPTW and add the energy consumption constraint as a capacity on each node.
This way I could ensure that after a bus finishes a specific line, it only drives to lines that are feasible with respect to the time window (i.e., whether it can reach, for example, line 30 before its start time) and the energy capacity (every bus has a battery, and the energy consumption must not bring it below a certain point x, where point x means the bus can drive the line and still have enough energy left to reach the closest charging facility).
I tried to add the capacity constraint with the AddDimension() functionality, but it doesn't work out the way I want it to.
My question is: is it even possible to realize my idea with the current state of OR-Tools, and if so, how is it done best?
I'm doing it in Python.
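For what it's worth, here is a minimal sketch (not a verified solution) of how a time-window dimension and an energy dimension could be wired together in OR-Tools' Python routing API; the data dictionary and all values are hypothetical placeholders.

```python
from ortools.constraint_solver import pywrapcp, routing_enums_pb2

# Hypothetical toy data: travel times between lines, time windows, the energy
# each line consumes, and the usable battery capacity of each bus.
data = {
    'time_matrix': [[0, 10, 15], [10, 0, 12], [15, 12, 0]],
    'time_windows': [(0, 100), (20, 60), (40, 90)],
    'energy': [0, 30, 45],       # energy consumed by serving each node (line)
    'battery': [100, 100],       # usable capacity per bus (vehicle)
    'num_vehicles': 2,
    'depot': 0,
}

manager = pywrapcp.RoutingIndexManager(len(data['time_matrix']),
                                       data['num_vehicles'], data['depot'])
routing = pywrapcp.RoutingModel(manager)

def time_cb(from_index, to_index):
    return data['time_matrix'][manager.IndexToNode(from_index)][manager.IndexToNode(to_index)]

def energy_cb(from_index):
    return data['energy'][manager.IndexToNode(from_index)]

time_idx = routing.RegisterTransitCallback(time_cb)
energy_idx = routing.RegisterUnaryTransitCallback(energy_cb)
routing.SetArcCostEvaluatorOfAllVehicles(time_idx)

# Time windows (the VRPTW part).
routing.AddDimension(time_idx, 30, 200, False, 'Time')
time_dim = routing.GetDimensionOrDie('Time')
for node, (start, end) in enumerate(data['time_windows']):
    if node != data['depot']:
        time_dim.CumulVar(manager.NodeToIndex(node)).SetRange(start, end)

# Energy as a capacity dimension: the accumulated consumption along a route
# can never exceed the bus's battery capacity.
routing.AddDimensionWithVehicleCapacity(energy_idx, 0, data['battery'], True, 'Energy')

params = pywrapcp.DefaultRoutingSearchParameters()
params.first_solution_strategy = routing_enums_pb2.FirstSolutionStrategy.PATH_CHEAPEST_ARC
solution = routing.SolveWithParameters(params)
```

Reserving enough energy to reach the nearest charging facility would then be a matter of how the energy costs are computed per arc, rather than extra API machinery.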

Network Traffic, MMO Tower Defense

I am programming an MMO Tower Defense game (client-server architecture). For cheat protection, the server needs to run the game logic. But I have a real design problem: when, say, 10 people fight a deathmatch against each other, every tower shot needs to be calculated and sent over to the players. When many towers are built (like 10 players * 10 towers = 100 towers), the traffic is very high (one player causes many messages per second). How can I solve this problem?
The server is written in Java (SmartFox 2X).
The client is written in C# (Unity 3D).
Thanks in advance.
Use strongly simplified units and combat rules on the server. Implement your damage algorithm as continuous damage over time applied to anyone standing inside the "damage area" around the tower.
You can use a two-dimensional battlefield on the server where all units, towers, and damage areas are just points and circles. You can also use several rings of damage, with a lower damage per second in the outer circles and a higher one in the inner circles around the defense tower.
You can use queues for managing single-target, multi-target, and area damage: with a single shot, only the first unit in the queue is affected by the damage area; with a multi shot, the first, second, and third are affected, and so on.
If a tower shoots a bullet every 5 seconds and does 40 damage per bullet, you just calculate on the server by applying 8 damage per second to every target standing in the "damage area" or "damage circle".
On the client you can do all the cosmetic things like flying bullets, hit effects, splashes, and fireworks.
But on the server, do all the work by tracking who enters damage areas and applying damage per second.
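A minimal sketch of that server-side tick, assuming a simple 2D world where towers and units are just coordinates; all class and field names are hypothetical.

```python
import math

class Tower:
    def __init__(self, x, y, radius, dps):
        self.x, self.y, self.radius, self.dps = x, y, radius, dps

class Unit:
    def __init__(self, x, y, hp):
        self.x, self.y, self.hp = x, y, hp

def server_tick(towers, units, dt):
    """Apply continuous damage for a dt-second server tick (e.g. dt = 0.1)."""
    for tower in towers:
        for unit in units:
            if math.hypot(unit.x - tower.x, unit.y - tower.y) <= tower.radius:
                unit.hp -= tower.dps * dt
    units[:] = [u for u in units if u.hp > 0]   # drop dead units in place

# A tower firing a 40-damage bullet every 5 s becomes a tower with dps = 8:
towers = [Tower(0, 0, radius=5, dps=8)]
units = [Unit(3, 4, hp=100)]
server_tick(towers, units, dt=0.1)
```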
Because of cheating protection, the server needs to have the logic.
This is a false assumption. The server needs enough logic to run the game rules and make sure that all players' actions make sense, but it does not need to send back the results. The clients can all run that same logic in parallel and figure out the results on their own.
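As a rough illustration of that idea (a sketch only, with hypothetical names, not SmartFox or Unity API calls): the server validates and relays each input, and every client applies the relayed inputs to its own copy of the same deterministic simulation.

```python
TOWER_COST = 100

class Player:
    def __init__(self, pid, gold):
        self.id, self.gold = pid, gold

def broadcast(message, client_queues):
    # stand-in for the real network send (e.g. a room broadcast)
    for queue in client_queues:
        queue.append(message)

def server_handle_build(player, tile, occupied_tiles, client_queues):
    # sanity-check the action; do not compute or send per-shot combat results
    if player.gold >= TOWER_COST and tile not in occupied_tiles:
        player.gold -= TOWER_COST
        occupied_tiles.add(tile)
        broadcast({"type": "build", "player": player.id, "tile": tile}, client_queues)

# usage: two clients' inbound message queues
clients = [[], []]
server_handle_build(Player(1, 250), (3, 4), set(), clients)
```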

What is the theory behind active noise cancellation?

In a previous question, I asked why I can't simply negate the source's time-domain amplitude values to produce a destructive noise signal.
One of the posters said that while simply producing an inverse-polarity (negated) signal will work in theory, in practice it is not possible.
So I am asking: what is the fundamental approach (in a sort of semi-technical way) to active noise cancellation?
Secondly, why is most of the literature on this topic in the frequency domain?
It's rather simple.
By the time you send your inverted signal, the noise has already been heard.
You need to look at what frequencies are being generated, and then produce the appropriately inverted signals to cancel them out.
Noise cancellation is prediction. Your algorithm has to predict what the sound of the noise will be at some time in the future (that time given by the system and audio time latencies), and then predict what signal will produce the opposite sound at that same point in the future (which your system will distort and delay, so you have to figure in the opposite distortion and delay).
You might be able to use several successive FFTs to determine which frequencies in the noise are not changing, and assume or calculate some probability that they will continue for a short time into the future.
If you know the frequency response curve of the speaker, you might be able to figure out the frequency amplitudes of a signal needed to match some predicted noise spectrum. The phase angle of a sinusoid will change with time. If you know the time delay of your output signal, you might be able to calculate the phase of a sinusoid at some point in the future. If you have a predicted phase of a particular frequency of noise at some time and location, you can add π to that phase angle to estimate the noise-cancelling signal.
If you don't know the frequency response and delay of your system, then you won't know what frequencies, amplitudes or phases of signal to create for cancellation. You might well end up amplifying the noise instead of cancelling it.
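A minimal sketch of that prediction step, assuming you have already estimated one noise component's frequency, amplitude, and phase, and that the total output latency is known; all numbers here are hypothetical.

```python
import numpy as np

fs = 48000                            # sample rate in Hz
latency = 0.005                       # assumed system + audio latency, seconds
freq, amp, phase = 120.0, 0.3, 0.7    # estimated noise component

n = np.arange(1024)
t = n / fs + latency                  # times at which these samples will be heard

predicted_noise = amp * np.cos(2 * np.pi * freq * t + phase)
anti_noise = amp * np.cos(2 * np.pi * freq * t + phase + np.pi)  # = -predicted_noise
```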
It seems that what's missing is the propagation delay required to intercept and negate a signal. The KISS rule will eventually prove this true. The FFT is a complex calculation, and each N-point iteration introduces error because of the time required to process the signal. To cancel a sound wave it needs to be intercepted in advance, processed, and inverted. Then the time constant of the transducer must be considered. My experience is that this is best done with a microphone near the source of the "noise", connected by wire to an amplifier and a transducer near the location where the noise is to be cancelled.
The basic idea of ANC is to find repetitive sound and play the opposite of it. If the repetitive sound continues to play, we will be able to cancel it. That goes in direct contradiction to the other answers, but I'll clarify.
Playing the opposite sound means playing it again with a precise power and delay, possibly inverting the waveform. The delay itself varies for each frequency. For example, for a 20 Hz sound we have to replay the inverted sound at a precise multiple of 1/20 = 0.05 s. For 23 Hz, the delay has to be a multiple of 1/23 ≈ 0.04347 s.
Since any waveform can be produced as a sum of sinusoids, one way of doing it would be to only worry about the N biggest sinusoids, measured in power (square of the amplitudes). For finding the sinusoids' frequencies and powers we use the Fourier transform, typically with the FFT algorithm.
If we take, for example, N = 8, it means we are trying to eliminate the 8 most powerful wave components. For each of them we store:
the wave's amplitude
the wave's offset (phase), taking the computer's clock as a base.
Then we constantly play 8 sinusoids, each at the correct power and with the correct delay. The hard part is what happens next. We need to keep listening in order to adapt, but now we are listening to the environment sound plus our own sound. That algorithm is harder to implement, but conceptually it is simpler, and one could figure out how to do it on one's own.
So, contrary to what the other answers say, managing the time delay is critical. It is not possible to create an ANC system without doing it. If you only care about the frequency domain, the only thing you could possibly do is filter those frequencies, which makes no sense for an ANC system.
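A minimal sketch of that "N most powerful sinusoids" idea, using NumPy's FFT on a synthetic noise block; speaker/microphone response and output latency, which a real system must compensate for, are ignored here.

```python
import numpy as np

fs, N = 48000, 8
t = np.arange(4096) / fs
# Stand-in for a recorded noise block: two hums plus a little broadband noise.
block = (0.5 * np.sin(2 * np.pi * 50 * t)
         + 0.2 * np.sin(2 * np.pi * 120 * t)
         + 0.01 * np.random.randn(t.size))

spectrum = np.fft.rfft(block)
freqs = np.fft.rfftfreq(block.size, d=1.0 / fs)
top = np.argsort(np.abs(spectrum))[-N:]            # the N most powerful bins

anti = np.zeros_like(block)
for k in top:
    amp = 2.0 * np.abs(spectrum[k]) / block.size   # bin magnitude -> amplitude
    phase = np.angle(spectrum[k])
    anti += amp * np.cos(2 * np.pi * freqs[k] * t + phase + np.pi)

residual = block + anti                            # ideally much quieter
```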

How to fade out volume naturally?

I have experimented with a sigmoid and logarithmic fade out for volume over a period of about half a second to cushion pause and stop and prevent popping noises in my music applications.
However, neither of these sounds "natural". And by this I mean they sound botched, like an amateur engineer was in charge of the sound decks.
I know the ear is logarithmic when it comes to volumes, or at least, twice as much power does not mean twice as loud. Is there a magic formula for volume fading? Thanks.
I spent many of my younger years mixing music recordings, live concerts and being a DJ for my school's radio station and the one thing I can tell you is that where you fade is also important.
Fading in on an intro or out during the end of a song sounds pretty natural as long as there are no vocals, but some of these computerized radio stations will fade ANYWHERE in a song to make the next commercial break ... I don't think there's a way to make that sound good.
In any case, I'll also answer the question you asked ... the logarithmic attenuation used for adjusting audio levels is generally referred to as "audio taper". Here's an excellent article that describes the physiology of human hearing in relation to the electronics we now use for our entertainment. See: http://tangentsoft.net/audio/atten.html.
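As a concrete illustration of that "audio taper" idea, here is a minimal sketch (assuming a mono float signal in [-1, 1] and NumPy) of a fade-out that is linear in decibels rather than in raw amplitude:

```python
import numpy as np

def fade_out_db(signal, fs, duration=2.0, floor_db=-60.0):
    """Fade the tail of `signal` out linearly in dB over `duration` seconds.

    A dB-linear ramp never quite reaches zero, so the gain is snapped to 0
    at the very last sample; -60 dB is already inaudible in most material.
    """
    n = min(len(signal), int(duration * fs))
    if n == 0:
        return signal.copy()
    gain = 10.0 ** (np.linspace(0.0, floor_db, n) / 20.0)
    gain[-1] = 0.0
    out = signal.copy()
    out[-n:] = out[-n:] * gain
    return out
```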
You'll want to make sure that the end of the fade out is at a "zero crossing" in the waveform.
Half a second is pretty fast. You might just want to extend the amount of time, unless it must be that fast. Generally 2 or 3 seconds is more natural.
More on timing: the fade should really follow the beat of the music and end at a natural point in the rhythm. Try getting the BPM of the song (this can be calculated roughly), and fading out over an interval equal to a whole or half note at that tempo.
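For example, a minimal sketch of deriving the fade length from the tempo (the BPM value is assumed to come from a beat detector or metadata):

```python
bpm = 120.0                  # hypothetical detected tempo
beat = 60.0 / bpm            # seconds per quarter note
fade_seconds = 4 * beat      # fade over a whole note (one bar in 4/4) = 2.0 s
```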
You might also try slowing down the playback speed while you're fading out. This will give a more natural vinyl record or magnetic tape sounding stop/pause. Linearly reduce playback speed while logarithmically reducing volume over the period of 1 second.
If you're just looking to get a clean sound when pausing or stopping playback then there's no need to fade at all - just find a zero-crossing point and stop there (or more realistically just fill the rest of that final buffer with silence). Fading out when the user expects the sound to stop immediately will sound unnatural, as you've noticed, because the result is decoupled from the action.
The reason for stopping at a zero-crossing point is that zero is the steady state value while the audio is stopped, so the transition between the two states is seamless. If you stop playback when the last sample's amplitude is large then you are effectively introducing transients into the audio from the point of view of the audio hardware when it reconstructs the analogue signal, which will be audible as pops and/or clicks.
Another approach is to fade to zero very fast (about 10 ms or less), which effectively achieves the same thing as the zero-crossing technique.
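A minimal sketch of the zero-crossing stop for a mono float buffer (NumPy assumed): cut at the last sign change before the requested stop point and pad the remainder of the buffer with silence.

```python
import numpy as np

def stop_at_zero_crossing(buffer, stop_index):
    segment = buffer[:stop_index]
    signs = np.signbit(segment)
    crossings = np.nonzero(signs[1:] != signs[:-1])[0]
    cut = int(crossings[-1]) + 1 if crossings.size else 0  # fall back to silence
    out = np.zeros_like(buffer)
    out[:cut] = segment[:cut]
    return out
```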

Dealing with Latency in Networked Games

I'm thinking about making a networked game. I'm a little new to this, and have already run into a lot of issues trying to put together a good plan for dead reckoning and network latency, so I'd love to see some good literature on the topic. I'll describe the methods I've considered.
Originally, I just sent the player's input to the server, simulated the game there, and broadcast changes in the game state to all players. This made cheating difficult, but under high latency things were a little difficult to control, since you don't see the results of your own actions immediately.
This GamaSutra article has a solution that saves bandwidth and makes local input appear smooth by simulating on the client as well, but it seems to throw cheat-proofing out the window. Also, I'm not sure what to do when players start manipulating the environment, pushing rocks and the like. These previously neutral objects would temporarily become objects the client needs to send PDUs about, or perhaps multiple players do at once. Whose PDUs would win? When would the objects stop being doubly tracked by each player (to compare with the dead reckoned version)? Heaven forbid two players engage in a sumo match (e.g. start pushing each other).
This gamedev.net bit shows the gamasutra solution as inadequate, but describes a different method that doesn't really fix my collaborative boulder-pushing example. Most other things I've found are specific to shooters. I'd love to see something more geared toward games that play like SNES Zelda, but with a little more physics / momentum involved.
Note: I'm not asking about physics simulation here -- other libraries have that covered. Just strategies for making games smooth and reactive despite network latency.
Check out how Valve does it in the Source Engine: http://developer.valvesoftware.com/wiki/Source_Multiplayer_Networking
If it's for a first person shooter you'll probably have to delve into some of the topics they mention such as: prediction, compensation, and interpolation.
I find this network physics blog post by Glenn Fiedler, and even more so the response/discussion below it, awesome. It is quite lengthy, but worthwhile.
In summary
The server cannot keep up with re-running the simulation whenever client input is received in a modern game physics simulation (i.e. vehicles or rigid body dynamics). Therefore the server orders all clients to run latency + jitter (time) ahead of the server, so that all incoming packets arrive just in time, right before the server needs them.
He also gives an outline of how to handle the type of ownership you are asking about. The slides he showed at GDC are awesome!
On cheating
Mr Fiedler himself (and others) state that this algorithm suffers from not being very cheat-proof. This is not true. This algorithm is no easier or harder to exploit than traditional client/server prediction (see the article regarding traditional client/server prediction in CD Sanchez' answer).
To be absolutely clear: the server is not easier to cheat simply because it receives network physical positioning just in time (rather than x milliseconds late as in traditional prediction). The clients are not affected at all, since they all receive the positional information of their opponents with the exact same latency as in traditional prediction.
No matter which algorithm you pick, you may want to add cheat-protection if you're releasing a major title. If you are, I suggest adding encryption against stooge bots (for instance an XOR stream cipher where the "keystream is generated by a pseudo-random number generator") and simple sanity checks against cracks. Some developers also implement algorithms to check that the binaries are intact (to reduce risk of cracking) or to ensure that the user isn't running a debugger (to reduce risk of a crack being developed), but those are more debatable.
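A minimal sketch of that XOR-stream idea (hypothetical, and deliberately simple: this only deters naive packet-sniffing bots and is not cryptographically secure):

```python
import random

def xor_stream(data: bytes, seed: int) -> bytes:
    # Both ends derive the same seed, so the same call encrypts and decrypts.
    prng = random.Random(seed)
    keystream = bytes(prng.randrange(256) for _ in range(len(data)))
    return bytes(b ^ k for b, k in zip(data, keystream))

packet = xor_stream(b"MOVE 12 7", seed=42)   # on the wire: obscured bytes
plain = xor_stream(packet, seed=42)          # back to b"MOVE 12 7"
```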
If you're just making a smaller indie game that may only ever be played by a few thousand players, don't bother implementing any anti-cheat algorithms until 1) you need them; or 2) the user base grows.
We have implemented a multiplayer snake game based on an authoritative server and remote players that make predictions. Every 150 ms (in most cases) the server sends back a message containing all the consolidated movements sent by each remote player. If remote client movements arrive late at the server, it discards them. The client will then replay its last movement.
Check out Networking education topics at the XNA Creator's Club website. It delves into topics such as network architecture (peer to peer or client/server), Network Prediction, and a few other things (in the context of XNA of course). This may help you find the answers you're looking for.
http://creators.xna.com/education/catalog/?contenttype=0&devarea=19&sort=1
You could try imposing latency on all your clients, based on the average latency in the area. That way the client can try to work around the latency issues and it will feel similar for most players.
I'm of course not suggesting that you force a 500 ms delay on everyone, but people with 50 ms can be fine with 150 ms (an extra 100 ms added) in order for the gameplay to appear smoother.
In a nutshell; if you have 3 players:
John: 30ms
Paul: 150ms
Amy: 80ms
After calculations, instead of sending the data back to the clients all at the same time, you account for their latency and start sending to Paul and Amy before John, for example.
This approach is not viable in extreme latency situations, though, where dial-up connections or flaky wireless users could really mess it up for everybody. But it's an idea.
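A minimal sketch of that equalization, using the hypothetical one-way latencies from the example above (in milliseconds):

```python
latencies = {"John": 30, "Paul": 150, "Amy": 80}

slowest = max(latencies.values())
send_delays = {name: slowest - lat for name, lat in latencies.items()}
# {'John': 120, 'Paul': 0, 'Amy': 50} -> send to Paul immediately,
# to Amy 50 ms later, and to John 120 ms later.
```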

Resources