A program consists of 40% ALU (r-type) instructions, 20% stores, 20% memory reads and 10% jumps. 40% of the values read from memory are used in the next instruction.
Assuming this is a large program, what is the CPI?
The given answer is 1.4. Can anyone explain this, please?
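The standard technique for this kind of problem is a weighted sum: CPI = base CPI + Σ(frequency × stall penalty) over the instruction classes. Note that the stated mix sums to only 90%, so the remaining 10% and the per-class penalties must come from the course's own assumptions. A minimal sketch, assuming a base CPI of 1 and a 1-cycle load-use stall (these penalties are assumptions for illustration, not given in the question, so this does not reproduce the stated 1.4):

```python
# Weighted-sum CPI sketch. The stall model (base CPI of 1, one stall cycle
# per load-use hazard) is an assumption for illustration.
mix = {"alu": 0.40, "store": 0.20, "load": 0.20, "jump": 0.10}
load_use_fraction = 0.40  # 40% of loaded values are used by the next instruction

base_cpi = 1.0
cpi = base_cpi + mix["load"] * load_use_fraction * 1.0  # 1 stall cycle per hazard
print(cpi)  # about 1.08 under these assumed penalties
```

Whatever penalty model your course uses, the answer comes from plugging its per-event costs into the same weighted sum.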
This is my first time programming a finite state machine for a boss fight in my game. I want the fight to feel dynamic, but the evaluation is currently very simple and works like this:
Player within short range? > pick random within short range state
Player within medium range? > pick random within medium range state
My goal is to make this a bit more sophisticated. What I'd love to achieve is a value called StateLikelihood for each state that determines how likely that state is to occur, and works like this:
StateLikelihood is 0% if the player is outside the state's range (or other factors can make it 0%)
StateLikelihood is reduced if the state has occurred recently (or other factors that reduce it)
StateLikelihood increases depending on various factors
Example:
Likelihood of state X is 70% if:
Player health is 60 (each health point adds +1%)
Player is within medium range (other ranges make it 0%)
Player has attacked (adds +10% to the likelihood)
After state X occurs, its likelihood drops to 50% for the next evaluation
My question is: does anyone know a probability formula that could help me achieve this, or is there a better approach? What trips me up is that this would be easy to do with a single state, but with multiple states, changing one state's likelihood also affects the probability of all the other states being selected.
The first problem I see is that I cannot simply add flat percentage increases at the next evaluation; my idea is that the increases might have to be relative to the state's maximum likelihood.
If anyone has any recommendations or resources I'd be delighted.
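A common way around the "percentages must sum to 100" problem is to not store probabilities at all: give each state a non-negative weight built from its modifiers, and normalize at selection time. Raising one state's weight then automatically lowers every other state's chance. A minimal sketch, using hypothetical state names and modifier values:

```python
import random

# Score-based state selection. A weight of 0 excludes a state entirely;
# normalizing the remaining weights turns them into selection probabilities.
def pick_state(weights):
    eligible = {s: w for s, w in weights.items() if w > 0}
    states = list(eligible)
    return random.choices(states, weights=[eligible[s] for s in states], k=1)[0]

# Example evaluation: player at medium range, health 60, just attacked.
weights = {
    "slash":    0,        # short-range state, player out of range -> 0
    "fireball": 60 + 10,  # +1 per player health point, +10 because player attacked
    "stomp":    40,
}
print(pick_state(weights))
```

Cooldowns fit the same scheme: after a state fires, multiply its weight by some factor (say 0.5) and let it recover each evaluation, instead of trying to subtract flat percentages.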
I have a multi-threaded application, and when I run Intel VTune Profiler on it, under the Caller/Callee tab I see that the callee function's "CPU Time: Total - Effective Time" is larger than the caller function's "CPU Time: Total - Effective Time".
e.g.

caller function: A
callee function: B (no one calls B but A)

Function | CPU Time: Total - Effective Time
A        | 54%
B        | 57%
My understanding is that CPU Time: Total is the sum of a function's self time plus the time of all of its callees. By that definition, shouldn't the CPU Time: Total of A be greater than that of B?
What am I missing here?
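The accounting described in the question ("Total = self + callees") can be sketched as follows; the function names and times are hypothetical:

```python
# Total(f) = Effective(f) + sum of Total over f's callees.
# If A is B's only caller, Total(A) >= Total(B) must hold under this model.
def total_time(fn, effective, callees):
    return effective[fn] + sum(total_time(c, effective, callees)
                               for c in callees.get(fn, ()))

effective = {"A": 5.0, "B": 57.0}   # self (effective) time per function
callees = {"A": ["B"]}              # call graph: A calls B
print(total_time("A", effective, callees))  # 62.0 -> Total(A) > Total(B)
```

So if the profile shows Total(B) > Total(A), either B has another caller or the sampled numbers violate the model, as the answers below discuss.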
It might be that function B is also being called by some function other than A, which would explain this.
Intel VTune Profiler works by sampling, and its numbers are less accurate for short run times. If your application runs for a very short duration, you could consider using the "allow multiple runs" option in VTune or increasing the run time.
Also, Intel VTune Profiler sometimes rounds the numbers, so results may not be exact; but that rounding error is very small (around 0.1%), while in your question the difference is 3%, so rounding won't be the reason.
So, if I have a building that takes 2 days and 4 hours to construct, and I'm building 40% faster than the initial time it would have taken, what is the initial time? Also, if I'm going 45% faster instead, by how much would the time be reduced?
Assuming you work constantly without stopping, it takes 52 hours to build when going 40% faster. You can simply use an inverse proportion: 140 * 52 = x * 100, where x is the time it takes to build at normal speed; in this case x = 72.8 hours. To check another speed, just replace 140 with the new percentage, remembering to add 100 since you are going x% faster than the initial speed.
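The inverse proportion above can be sketched as follows (the function names are just for illustration): time scales as 100 / (100 + p) when working p% faster.

```python
# Time at normal speed, given the time observed while going percent_faster% faster.
def normal_time(boosted_hours, percent_faster):
    return boosted_hours * (100 + percent_faster) / 100

# Time while going percent_faster% faster, given the normal-speed time.
def boosted_time(normal_hours, percent_faster):
    return normal_hours * 100 / (100 + percent_faster)

normal = normal_time(52, 40)      # 72.8 hours at normal speed
print(normal)
print(boosted_time(normal, 45))   # about 50.2 hours at 45% faster
```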
A friend of yours proposes a game: if you correctly guess the amount of money in his wallet, the cash is yours; otherwise, you get nothing. You're allowed a single guess.
You believe there's a 50 percent chance your friend has $0 in his wallet, a 25 percent chance of $1, 24 percent of $100 and — excitingly — a one percent chance he has $1,000.
What dollar amount should you guess in order to maximize your expected winnings?
You can argue like this: suppose that you take a guess N times with N approaching infinity, and you must always guess the same amount. What would you earn if your constant guess was $1? [There is no point betting $0.] Well, 25% of the time you would win, so your earnings would be 0.25*N*($1) = $(0.25*N).
Likewise, if your guess was $100 you would win 24% of the time, so your earnings would be 0.24*N*($100) = $(24*N), and if your guess was $1000 you would win 1% of the time, yielding 0.01*N*($1000) = $(10*N).
So, the guess which maximizes the earnings (in the sense described) is $100.
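The argument above reduces to comparing P(amount) × amount for each possible guess:

```python
# Expected winnings per play for each possible guess, using the stated odds.
pct = {0: 50, 1: 25, 100: 24, 1000: 1}               # percent chance of each amount
expected = {g: g * p / 100 for g, p in pct.items()}  # expected dollars per play
best = max(expected, key=expected.get)
print(best, expected[best])  # 100 24.0
```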
I am running a small Cassandra cluster on Google Compute Engine. From our CPU graphs (as reported by collectd), I notice that a nontrivial amount of processor time is spent in NICE. How can I find out which process is consuming this? I've tried just starting top and staring at it for a while, but the NICE CPU usage is spiky (most of the time NICE is at 0%; only occasionally does it jump to 30-40%), so "sit and wait" isn't very effective.
"Nice" generally refers to to the priority of a process. (More positive values are lower priority, more negative values are higher priority.) You can run ps -eo nice,pid,args | grep '^\s*[1-9]' to get a list of positive nice (low priority) commands.
On a CPU graph, NICE time is time spent running processes with a positive nice value (i.e., low priority). Such a process is consuming CPU, but will give up that CPU time to most other processes. Any USER CPU time for one of the processes listed by the ps command above will show up as NICE.
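To catch intermittent spikes without watching top, one option is to log positively-niced CPU consumers periodically; a rough sketch assuming a Linux procps-style ps (pidstat from the sysstat package is an alternative that reports instantaneous per-process usage):

```shell
# Log positively-niced processes that are using CPU, once per second for a minute.
# Note: ps's %cpu is averaged over the process lifetime, so sharp spikes may be
# understated; pidstat gives better per-interval numbers.
for i in $(seq 1 60); do
    ps -eo nice,pid,pcpu,comm --no-headers \
        | awk -v t="$(date +%T)" '$1 > 0 && $3 > 0 {print t, $0}'
    sleep 1
done
```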