How many game updates per second? [closed] - frame-rate

What update rate should I run my fixed-rate game logic at?
I've used 60 updates per second in the past, but that's awkward because it doesn't work out to a whole number of milliseconds per update (16.666...). My current game uses 100, but that seems like overkill for most things.

None of the above. For the smoothest gameplay possible, your game should be time-based, not frame-locked. Frame-locking works for simple games where you can tweak the logic and lock down the framerate. It doesn't do so well with modern 3D titles where the framerate jumps all over the board and the screen may not be VSynced.
All you need to do is figure out how fast an object should be going (i.e. virtual units per second), compute the amount of time since the last frame, scale the number of virtual units to match the amount of time that has passed, then add those values to your object's position. Voila! Time-based movement.
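As a rough sketch of that idea in C++ (the names and numbers here are purely illustrative, not from the original answer):
#include <chrono>

struct Object
{
    float x = 0.f;       // position in virtual units
    float speed = 120.f; // virtual units per second
};

void UpdateObject(Object& obj, float dtSeconds)
{
    // Scale the per-second speed by the elapsed time and apply it to the position.
    obj.x += obj.speed * dtSeconds;
}

int main()
{
    using Clock = std::chrono::steady_clock;
    Object obj;
    auto last = Clock::now();
    for (;;)
    {
        auto now = Clock::now();
        float dt = std::chrono::duration<float>(now - last).count(); // seconds since last frame
        last = now;
        UpdateObject(obj, dt);
        // Draw(obj); // render at whatever rate the system manages
    }
}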

I used to maintain a Quake3 mod and this was a constant source of user-questions.
Q3 uses 20 'ticks per second' by default - the graphics subsystem interpolates between ticks so you still get smooth motion on screen. I initially thought this was way too low, but it turns out to be fine, and there really aren't many games with faster action than Q3.
I'd personally go with "good enough for John Carmack, good enough for me".

I like 50 for fixed-rate PC games. I can't really tell the difference between 50 and 60 (and if you are making a game where that difference is noticeable or matters, you should probably be at 100).
You'll notice the question asks about 'fixed-rate game logic' and not the 'draw loop'. For clarity, the code will look something like:
while (1)
{
    // Catch up on however many fixed-length ticks have elapsed, then draw once.
    while (CurrentTime() > lastUpdate + TICK_LENGTH)
    {
        UpdateGame();
        lastUpdate += TICK_LENGTH;
    }
    Draw();
}
The question is: what should TICK_LENGTH be?

Bear in mind that unless your code is measured down to the cycle, each pass through the game loop won't take the same number of milliseconds to complete - so 16.666... not being a whole number isn't really an issue, because you will need to time and compensate anyway. Also note that 16.666... isn't an update rate; it's the average number of milliseconds per update your game loop should be targeting.

Such variables are generally best found via the guess and check strategy.
Implement your game logic so that it is refresh-agnostic (for instance, expose the ms/update as a variable and use it in any calculations), then play around with the refresh rate until it works, and keep it there.
As a short-term fix, if you want a whole number of milliseconds per update but don't care about the evenness of the updates per second, 15 ms (about 67 updates/sec) is close to 60 updates/sec. If you care about both, 20 ms (exactly 50 updates/sec) is probably the closest you are going to get.
In either case, I would simply treat time as a double (or a long with high resolution) and provide the rate to your game as a variable rather than hard-coding it.
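To make that concrete, here's a tiny sketch (the names are illustrative) of deriving the tick length from a configurable rate instead of hard-coding it:
double updatesPerSecond = 50.0;              // tune by guess-and-check
double tickSeconds = 1.0 / updatesPerSecond; // 0.02 s at 50 updates/sec

void UpdateGame(double stepSeconds);         // game logic takes the step length as input

void RunPendingTicks(double& accumulatorSeconds)
{
    // Run however many whole ticks fit into the time that has accumulated.
    while (accumulatorSeconds >= tickSeconds)
    {
        UpdateGame(tickSeconds);
        accumulatorSeconds -= tickSeconds;
    }
}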

The ideal is to run at the same refresh-rate as the monitor. That way your visuals and the game updates don't go in and out of phase with each other. The fact that each frame doesn't last an integral number of milliseconds shouldn't matter to you; why is that a problem?

I usually use 30 or 33. It's frequent enough for the user to feel the flow and infrequent enough not to hog the CPU too much.

Normally I don't limit the FPS of the game; instead I change all my logic to take the time elapsed since the last frame as input.
As far as fixed rates go, unless you need a high rate for some reason, you should use something like 25/30. That should be fast enough, and it will make your game a little lighter on CPU usage.

Your engine should both "tick" (update) and draw at 60fps with vertical sync (vsync). This refresh rate is enough to provide:
low input lag for a feeling of responsiveness,
and smooth motion even when the player and scene are moving rapidly.
Both the game physics and the renderer should be able to drop frames if they need to, but optimize your game to run as close to this 60 Hz standard as possible. Also, some subsystems like AI can tick closer to 10-20 fps, and make sure your physics are interpolated on a frame-to-frame time delta, as described here: http://gafferongames.com/game-physics/fix-your-timestep/
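A rough sketch of that interpolation (State, Simulate and Render are placeholders, not names from the article): the renderer blends between the previous and current physics states by how far the accumulator has progressed into the next tick:
struct State { float x = 0.f; };

void Simulate(State& s, float dt); // advance physics by one fixed step
void Render(const State& s);

const float TICK = 1.f / 60.f;     // fixed physics step, in seconds
float accumulator = 0.f;
State previous, current;

void Frame(float frameDelta)
{
    accumulator += frameDelta;
    while (accumulator >= TICK)
    {
        previous = current;
        Simulate(current, TICK);
        accumulator -= TICK;
    }
    // Blend by how far we are into the next tick so drawing stays smooth
    // even though the physics only advances in fixed steps.
    float alpha = accumulator / TICK;
    State blended;
    blended.x = previous.x + (current.x - previous.x) * alpha;
    Render(blended);
}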

Related

Dynamics of burn down charts in Scrum [closed]

I have a question about how dynamic the Y axis of a burn down chart in Scrum should be. We plot the chart at the beginning of a sprint, with the total number of estimated story points on the Y axis and the planned days on the X axis.
Usually, during the sprint, we have a fair amount of:
unplanned tasks / stories;
tasks / stories that take longer than estimated (re-estimated by the person checking out the task);
Questions:
Should the story points of the unplanned tasks be plotted on the chart? If so, should we extend the Y axis as well and redraw the expected curve, or just plot the points and have an actual curve that may rise above the starting point?
Should the re-estimations be counted when plotting the chart, or just the initial estimates? The same sub-questions as above apply...
I would prefer to ignore the unplanned items and the re-estimations as they will show up in the actual focus factor calculation anyway. Is it wrong?
Try using a burn UP chart.
http://www.nearinfinity.com/blogs/lee_richardson/forget_burndown_use_burnup_charts.html
Also, I would do everything in your power to stop the unplanned items. They are typically very caustic. If it's code debt cashing in, try to address it a little bit at a time in every sprint. If it's a consistent amount of time every sprint, perhaps create a story at the start of the sprint for "unplanned tasks" or "production fixes" or something like that.
In the end, what really matters is that the burndown chart allows you to track progress (or lack thereof) toward the commitment. So as long as you're achieving that, you're good to go. Which means, really any of these solutions would work - just pick one and go with it.
We usually do option number 2 at work, adding the new story points to the actual line so that we "see" the line go up, reflecting new learnings and additions. But since opinions vary, I guess your team will have to agree on what suits them best, since these burndown charts are for the team to show progress throughout the sprint.
What you count or not count should depend on what you are using your burndown for.
When I use a burndown it is most often to answer the question "Are we on track to completing our commitment of this sprint - or do we need to take external action?".
In that case, the thing that is most relevant to track is the "anticipated total amount of work left to finish the commitment"; whether that work was planned or unplanned, or was originally estimated at a different amount, is uninteresting in this context. It is still work that needs to be done - so it all counts.
So, count all remaining work. If the graph points towards the goal, keep working. If it points somewhere drastically different, take external action (e.g. renegotiate the sprint commitment with the PO).
Now, you might be trying to answer another question (e.g. "how good are we at planning" or "are we having scope creep during the sprint"), and in that case you would count in a different way.
A burndown chart is useful for tracking progress towards the team's commitment. In this case, it sounds like your team is struggling with two things that don't relate to the burndown chart:
1. Unplanned work
2. Poor estimates.
The key here is to focus on those problems. No matter what you do with the burndown chart, if you're adding unplanned work and your estimates are poor... you'll never derive any value from the burndown chart.
I'd recommend a couple of things:
1. Switch to tracking hours for Tasks... not points. Hours are tangible for the team... they mean something. Points are typically burned down at the release level.
2. Try shortening the length of your sprints. It's easier to achieve a smaller goal.
3. Ensure that task estimates are no longer than 8 hours. In fact, I'd shorten that to probably 4 hours. Estimating tasks that take longer than a single day encourages the wrong behavior for the team.
4. Ensure that you're spending enough time in Sprint Planning that the team can make a commitment. An effective sprint planning meeting is the first step towards an effective sprint.

Flex Profiling (Flex Builder): comparing two results

I am trying to use the Flex Profiler to improve the application performance (loading time, etc.). I have seen the profiler results for the current design. I want to compare these results against a new design for the same set of data. Is there a direct way to do this? I don't know of any way to save the current profiling results in a history and compare them later with the results of a new design.
Otherwise I have to do it manually, write the two results in a notepad and then compare it.
Thanks in advance.
Your stated goal is to improve aspects of the application performance (loading time, etc.) I have similar issues in other languages (C#, C++, C, etc.) I suggest that you focus not so much on the timing measurements that the Flex profiler gives you, but rather use it to extract a small number of samples of the call stack while it is being slow. Don't deal in summaries, but rather examine those stack samples closely. This may bend your mind a little bit, because it will not give you particularly precise time measurements. What it will tell you is which lines of code you need to focus on to get your speedup, and it will give you a very rough idea of how much speedup you can expect. To get the exact amount of speedup, you can time it afterward. (I just use a stopwatch. If I'm getting the load time down from 2 minutes to 10 seconds, timing it is not a high-tech problem.)
(If you are wondering how/why this works: the program is slower than it could be because it is requesting work, mostly via method calls, that you are going to avoid doing so much of. For whatever fraction of the time is being spent in those method calls, they are sitting exposed on the stack, where you can easily see them. For example, if a line of code is costing you 60% of the time and you take 5 stack samples, it will appear on roughly 3 of them, plus or minus 1, regardless of whether it is executed once or a million times. So any such line that shows up on multiple stack samples is a possible target for optimization, and targets for optimization will appear on multiple stack samples if you take enough.
The hard part about this is learning not to be distracted by all the profiling results that are irrelevant. Milliseconds, average or total, for methods, are irrelevant. Invocation counts are irrelevant. "Self time" is irrelevant. The call graph is irrelevant. Some packages worry about recursion - it's irrelevant. CPU-bound vs. I/O bound - irrelevant. What is relevant is the fraction of stack samples that individual lines of code appear on.)
ADDED: If you do this, you'll notice a "magnification effect". Suppose you have two independent performance problems, A and B, where A costs 50% and B costs 25%. If you fix A, total time drops by 50%, so now B takes 50% of the remaining time and is easier to find. On the other hand, if you happen to fix B first, time drops by 25%, so A is magnified to 67%. Any problem you fix makes the others appear bigger, so you can keep going until you just can't squeeze it any more.

Scrum: Unfinished products and sprint velocity [closed]

Let’s say product X is worth 10 story points. Development starts in sprint Y, but is not completed in time. What do you do with the story points when calculating sprint Y’s velocity?
Would you:
a. Allocate 0 story points for sprint Y and 10 points for the sprint it is eventually completed in;
b. Determine the story points for the remaining work (let’s say 3) and allocate the difference to sprint Y (7 in our example); or
c. Something else?
Thanks in advance!
Depends on whether you care about your "instantaneous" or "average" velocity. Personally, I wouldn't make it more complicated than necessary and just add it into the sprint where it was completed. Calculate your average velocity by looking at the average number of points completed per sprint over the last 3, 6, and 12 months. Hopefully, these will eventually converge and you'll have a good idea of how much you can get done in one sprint.
Allocate 0 points for sprint Y and 10 points for the sprint in which the story is eventually completed. Either the story is done or it is not done; there is no middle ground. You want to avoid "50% done", or your teams may implement many stories halfway and none completely.
It is perfectly okay not to finish a story during a sprint and to complete it in the next one. But you should not present that story to the product owner during the sprint review.
If you have enough stories for a given sprint, it won't matter if the story is completed this sprint or the next. Things will average.
It is also important to explain to the team and to the stakeholders that the velocity helps estimate when the release will take place and is not a measure of the team performance.
The team should be judged on the final result they produce, not when those results are produced.
Combined with a well-prioritized backlog, you will create good-quality software that meets your customers' needs.
That's one of the ideas of the sprint: "completeness" is binary, either done or not. Over time the team(s) will get better at estimation and this question will lose relevance.
BUT...
The next question is how you calculate your commitment for the sprint after Y. Say your past weather shows you have an average velocity of 20 pts. If you carry the story over, then you carry over 10 pts. However, if you think there are only 3 pts left of the story, do you:
A) Take on another 17 pts to fill your estimated capacity of 20 pts, or
B) Only take on 10 pts more, because the story carried over was originally estimated at 10 pts?
We got into a mess trying to do A. What do other people think?
[Update]
I posted a question about this:
Work out sprint capacity when carrying over story points in scrum
The situation here is not satisfactory, but at the moment we estimate the work remaining for unfinished stories. If it is only around 20% or less, we leave the story and its points in the sprint they are in. If it is more than that, we ask the PO whether we should finish the story; if yes, we move it to the new sprint.
However this is not satisfactory for several reasons.
First, big or risky stories should have been started at the beginning of the sprint so that non-completion could have been avoided.
Second, we get inaccurate (but probably smoother) velocity estimates, which are less useful going forward.
Third, it isn't strict, and the team is like a two-year-old child: show it a slight weakness and it wants to exploit it.
Finally, strictness is being tightened as time progresses, and the teams are finding their feet to an extent and learning the best ways of dealing with things. We already have massive variation in velocity - most teams have a comment on each and every sprint about what factors (holiday, illness, etc.) affected it... totally bad :(

How to fade out volume naturally?

I have experimented with a sigmoid and logarithmic fade out for volume over a period of about half a second to cushion pause and stop and prevent popping noises in my music applications.
However neither of these sound "natural". And by this I mean, they sound botched. Like an amateur engineer was in charge of the sound decks.
I know the ear is logarithmic when it comes to volumes, or at least, twice as much power does not mean twice as loud. Is there a magic formula for volume fading? Thanks.
I spent many of my younger years mixing music recordings, live concerts and being a DJ for my school's radio station and the one thing I can tell you is that where you fade is also important.
Fading in on an intro or out during the end of a song sounds pretty natural as long as there are no vocals, but some of these computerized radio stations will fade ANYWHERE in a song to make the next commercial break ... I don't think there's a way to make that sound good.
In any case, I'll also answer the question you asked ... the logarithmic attenuation used for adjusting audio levels is generally referred to as "audio taper". Here's an excellent article that describes the physiology of human hearing in relation to the electronics we now use for our entertainment. See: http://tangentsoft.net/audio/atten.html.
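As a very rough sketch of that kind of taper (my own illustrative code, not from the linked article): ramp the gain linearly in decibels, which is exponential in amplitude, rather than linearly in amplitude:
#include <algorithm>
#include <cmath>
#include <vector>

// Fade out over the last 'fadeSeconds' of the buffer, linear in dB down to a -60 dB floor.
// The mono float buffer layout and the -60 dB floor are assumptions made for this example.
void FadeOutDb(std::vector<float>& samples, int sampleRate, float fadeSeconds)
{
    int fadeLen = std::min(static_cast<int>(fadeSeconds * sampleRate),
                           static_cast<int>(samples.size()));
    if (fadeLen <= 0)
        return;
    int start = static_cast<int>(samples.size()) - fadeLen;
    for (int i = 0; i < fadeLen; ++i)
    {
        float t = static_cast<float>(i) / fadeLen;              // 0..1 across the fade
        float gainDb = -60.0f * t;                              // 0 dB down to -60 dB
        samples[start + i] *= std::pow(10.0f, gainDb / 20.0f);  // dB -> linear gain
    }
}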
You'll want to make sure that the end of the fade out is at a "zero crossing" in the waveform.
Half a second is pretty fast. You might just want to extend the amount of time, unless it must be that fast. Generally 2 or 3 seconds is more natural.
More on timing: the fade should really follow the beat of the music and end at a natural point in the rhythm. Try getting the BPM of the song (this can be calculated roughly), and fading out over an interval equal to a whole or half note in that timing.
You might also try slowing down the playback speed while you're fading out. This will give a more natural vinyl record or magnetic tape sounding stop/pause. Linearly reduce playback speed while logarithmically reducing volume over the period of 1 second.
If you're just looking to get a clean sound when pausing or stopping playback then there's no need to fade at all - just find a zero-crossing point and stop there (or, more realistically, just fill the rest of that final buffer with silence). Fading out when the user expects the sound to stop immediately will sound unnatural, as you've noticed, because the result is decoupled from the action.
The reason for stopping at a zero-crossing point is that zero is the steady state value while the audio is stopped, so the transition between the two states is seamless. If you stop playback when the last sample's amplitude is large then you are effectively introducing transients into the audio from the point of view of the audio hardware when it reconstructs the analogue signal, which will be audible as pops and/or clicks.
Another approach is to fade to zero very fast (roughly 10 ms or less), which effectively achieves the same thing as the zero-crossing technique.
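Here's a small sketch of the zero-crossing idea (illustrative names, assuming a mono float buffer): find the first crossing after the requested stop point and silence everything from there on:
#include <cstddef>
#include <vector>

void StopAtZeroCrossing(std::vector<float>& buffer, std::size_t stopIndex)
{
    std::size_t i = stopIndex;
    // Walk forward until the signal touches or crosses zero.
    while (i + 1 < buffer.size() &&
           buffer[i] != 0.0f &&
           (buffer[i] > 0.0f) == (buffer[i + 1] > 0.0f))
    {
        ++i;
    }
    // Fill the remainder of the buffer with silence from that point.
    for (std::size_t j = i; j < buffer.size(); ++j)
        buffer[j] = 0.0f;
}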

Fixed vs. variable frame rates in games: what is best, and when?

After working for a while developing games, I've been exposed to both variable frame rates (where you work out how much time has passed since the last tick and update actor movement accordingly) and fixed frame rates (where you work out how much time has passed and choose either to tick a fixed amount of time or sleep until the next window comes).
Which method works best for specific situations? Please consider:
Catering to different system specifications;
Ease of development/maintenance;
Ease of porting;
Final performance.
I lean towards a variable framerate model, but internally some systems are ticked on a fixed timestep. This is quite easy to do by using a time accumulator. Physics is one system which is best run on a fixed timestep, and ticked multiple times per frame if necessary to avoid a loss in stability and keep the simulation smooth.
A bit of code to demonstrate the use of an accumulator:
const float STEP = 1.f / 60.f; // seconds per simulation step (assumes 'delta' is in seconds)
float accumulator = 0.f;

void Update(float delta)
{
    accumulator += delta;
    while (accumulator >= STEP)
    {
        Simulate(STEP);        // always advance the simulation by a fixed amount
        accumulator -= STEP;
    }
}
This is not perfect by any means but presents the basic idea - there are many ways to improve on this model. Obviously there are issues to be sorted out when the input framerate is obscenely slow. However, the big advantage is that no matter how fast or slow the delta is, the simulation is moving at a smooth rate in "player time" - which is where any problems will be perceived by the user.
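One common refinement (my addition, not part of the original answer) is to cap the number of steps taken per frame, so that a single very slow frame can't trigger a runaway backlog of updates:
const float STEP = 1.f / 60.f;        // seconds per simulation step
const int MAX_STEPS_PER_FRAME = 5;    // arbitrary safety cap (an assumption for this sketch)
float accumulator = 0.f;

void Simulate(float dt);              // the fixed-step simulation from above

void Update(float delta)
{
    accumulator += delta;
    int steps = 0;
    while (accumulator >= STEP && steps < MAX_STEPS_PER_FRAME)
    {
        Simulate(STEP);
        accumulator -= STEP;
        ++steps;
    }
    if (steps == MAX_STEPS_PER_FRAME)
        accumulator = 0.f;            // drop the backlog: the game slows down instead of hitching forever
}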
Generally I don't get into the graphics & audio side of things, but I don't think they are affected as much as Physics, input and network code.
It seems that most 3D developers prefer variable FPS: the Quake, Doom and Unreal engines all scale up and down based on system performance.
At the very least you have to compensate for frame rates that are too fast (unlike '80s games that ran way too fast on '90s hardware).
Your main loop should be parameterized by the timestep anyhow, and as long as it's not too long, a decent integrator like RK4 should handle the physics smoothly. Some types of animation (keyframed sprites) could be a pain to parameterize. Network code will need to be smart as well, to prevent players with faster machines from shooting too many bullets, for example, but this kind of throttling needs to be done for latency compensation anyhow (the animation parameterization would help hide network lag too).
The timing code will need to be modified for each platform, but it's a small, localized change (though some systems make extremely accurate timing difficult; Windows, Mac and Linux seem OK).
Variable frame rates allow for maximum performance. Fixed frame rates give consistent performance but will never reach the maximum on all systems (that seems to be a show-stopper for any serious game).
If you are writing a networked 3D game where performance matters I'd have to say, bite the bullet and implement variable frame rates.
If it's a 2D puzzle game you probably can get away with a fixed frame rate, maybe slightly parameterized for super slow computers and next years models.
One option that I, as a user, would like to see more often is dynamically changing the level of detail (in the broad sense, not just the technical sense) when framerates vary outside of a certain envelope. If you are rendering at 5 FPS, turn off bump-mapping. If you are rendering at 90 FPS, increase the bells and whistles a bit and give the user some prettier images to waste their CPU and GPU on.
If done right, the user should get the best experience out of the game without having to go into the settings screen and tweak it themselves, and you, as a level designer, should have to worry less about keeping the polygon count the same across different scenes.
Of course, I say this as a user of games, and not a serious one at that -- I've never attempted to write a nontrivial game.
The main problem I've encountered with variable length frame times is floating point precision, and variable frame times can surprise you in how they bite you.
If, for example, you're adding the frame time * velocity to a position, and frame time gets very small, and position is largish, your objects can slow down or stop moving because all your delta was lost due to precision. You can compensate for this using a separate error accumulator, but it's a pain.
Having fixed frame times (or at least a lower bound on frame length) lets you control how much FP error you need to take into account.
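The "separate error accumulator" idea mentioned above can be sketched as Kahan-style compensated summation of the per-frame increments (my own illustrative sketch, not code from the answer):
// Keeps tiny velocity * frameTime increments from vanishing against a large position value.
struct CompensatedPosition
{
    float value = 0.f;        // the (possibly large) position
    float compensation = 0.f; // running low-order error

    void Add(float delta)     // delta = velocity * frameTime
    {
        float y = delta - compensation;
        float t = value + y;             // high-order part of the sum
        compensation = (t - value) - y;  // recover what was lost to rounding
        value = t;
    }
};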
My experience is fairly limited to somewhat simple games (developed with SDL and C++) but I have found that it is quite easy just to implement a static frame rate. Are you working with 2d or 3d games? I would assume that more complex 3d environments would benefit more from a variable frame rate and that the difficulty would be greater.

Resources