I am writing a program in Fortran and I need a way of calculating the duration of the program down to milliseconds. I have been using the function "date_and_time", which leaves me with an array containing the system's time in hours, minutes, seconds, and milliseconds.
I believe that I can call this function at the start of my program to store the current time, then call it again at the end of my program to store the latest time. But after that, how would I compute the duration? I tried just subtracting the values, but the milliseconds reset when one second passes, just like the seconds reset when one minute passes. What would be the best way to approach this?
Here is the program:
PROGRAM TEST_TIME_AND_DATE
  IMPLICIT NONE
  INTEGER :: I
  REAL :: J
  INTEGER :: TIME_A(8), TIME_B(8)

  CALL DATE_AND_TIME(VALUES=TIME_A)
  PRINT '(8I5)', TIME_A
  J = 0.0
  DO I = 0, 400000000        ! busy loop, just to make some time pass
    J = I * I - J
  END DO
  CALL DATE_AND_TIME(VALUES=TIME_B)
  PRINT '(8I5)', TIME_B
END PROGRAM TEST_TIME_AND_DATE
And here is the result:
2011 6 11 -300 9 14 49 304
2011 6 11 -300 9 14 50 688
I'm not sure what to do here, thanks.
If you want elapsed wall-clock time, it is simpler to use the intrinsic procedure system_clock, since it provides a single time-value output. (It has additional arguments that return information about the clock, which is why it is a subroutine rather than a function.) See, for example, http://gcc.gnu.org/onlinedocs/gfortran/SYSTEM_005fCLOCK.html. If you want to time CPU usage instead, use cpu_time. Either way: two calls, one at the start and one at the end of the program, then a simple difference. For system_clock, divide the difference in counts by the COUNT_RATE argument to convert the integer count into seconds.
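For comparison only, here is the same two-call-and-subtract pattern sketched in Python rather than Fortran (an illustration of the idea, not the Fortran API): time.perf_counter() plays the role of system_clock for elapsed wall-clock time, and time.process_time() plays the role of cpu_time.

import time

wall_start = time.perf_counter()    # elapsed (wall-clock) time, like system_clock
cpu_start = time.process_time()     # CPU time used by this process, like cpu_time

total = 0.0
for i in range(10_000_000):         # stand-in for the busy loop in the question
    total += i * 0.5

print("wall:", time.perf_counter() - wall_start, "s")
print("cpu: ", time.process_time() - cpu_start, "s")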
You can subtract the values component-wise, then convert everything into milliseconds and sum it up: the milliseconds, the seconds in ms, the minutes in ms, the hours in ms, ...
In your case this would be
0 + 0 + 0 + 0 + 0 + 1*1000 + 384 = 1384 [ms]
This approach also works when individual components wrap around, since a positive number in a further-left column outweighs negative numbers to its right once they are all converted to the same base. E.g. 0:58.000 to 1:02.200 yields
1 * 60000 + (-56) * 1000 + 200 = 4200
Note that this works up to days, but not with months, since months do not share a common length.
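A minimal sketch of that arithmetic in Python (not Fortran), assuming the two samples are stored as 8-element lists in the same order as DATE_AND_TIME's VALUES argument (year, month, day, UTC offset in minutes, hour, minute, second, millisecond):

def ms_of_day(values):
    # Fold hours, minutes, seconds and milliseconds into milliseconds since midnight.
    return ((values[4] * 60 + values[5]) * 60 + values[6]) * 1000 + values[7]

time_a = [2011, 6, 11, -300, 9, 14, 49, 304]
time_b = [2011, 6, 11, -300, 9, 14, 50, 688]
print(ms_of_day(time_b) - ms_of_day(time_a))   # 1384, matching the example above

Subtracting the two totals is equivalent to subtracting component-wise and then summing, so the wrap-around case is handled automatically (as long as midnight is not crossed).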
You could calculate the offset from some starting time (Jan 1, 1970 for UNIX) in seconds or milliseconds. The difference in those numbers is your elapsed time.
(2011 - 1970) * (number of seconds in a year) +
(month of the year - 1) * (number of seconds in a month) +
(day of the month - 1) * (number of seconds in a day) +
( ... )
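Here is a sketch of that idea in Python, using the datetime module to do the calendar arithmetic instead of hand-picked month lengths (the library call is my substitution, not part of the answer above); the input lists reuse the DATE_AND_TIME values from the question:

from datetime import datetime

def seconds_since_1970(v):
    # v = (year, month, day, utc_offset_min, hour, minute, second, millisecond)
    dt = datetime(v[0], v[1], v[2], v[4], v[5], v[6], v[7] * 1000)  # ms -> microseconds
    return (dt - datetime(1970, 1, 1)).total_seconds()

time_a = [2011, 6, 11, -300, 9, 14, 49, 304]
time_b = [2011, 6, 11, -300, 9, 14, 50, 688]
print(seconds_since_1970(time_b) - seconds_since_1970(time_a))   # 1.384 seconds

The UTC offset is ignored here because it is identical for both samples.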
Related
Dealing with a bit of a head scratcher. This is more of a logic-based issue than an actual Power BI code problem. Hoping someone can help out! Here's the scenario:
Site  Shift Num  Start Time  End Time  Daily Output
A     1          8:00 AM     4:00 PM   10000
B     1          7:00 AM     3:00 PM   12000
B     2          4:00 PM     2:00 AM   7000
C     1          6:00 AM     2:00 PM   5000
This table contains the sites as well as their respective shift times. The master table above is part of an effort to capture throughput data from each of these sites. This master table is connected to tables with a running log of output for each site and shift, like so:
Site  Shift Number  Output  Timestamp
A     1             2500    9:45 AM
A     1             4200    11:15 AM
A     1             5600    12:37 PM
A     1             7500    2:15 PM
So there is a one-to-many relationship between the master table and these child throughput tables. The goal is to create a gauge chart with the following metrics:
Value: Latest Throughput Value (Latest Output in Child Table)
Maximum Value: Throughput Target for the Day (Shift Target in Master Table)
Target Value: Time-dependent projected target
i.e. if we are halfway through Site A's shift 1, we should be at 5000 units: (time passed in shift / total shift time) * shift output
if the shift is currently in non-working hours, then the target value = maximum value
Easy enough, but the problem we are facing is that the target value comes out wrong for shifts that cross into the next day (e.g. Site B's shift 2).
The shift times are stored as date-independent time values. Here's the code for the measure to get the target value:
VAR CurrentTime = HOUR(UTCNOW()) * 60 + MINUTE(UTCNOW())
VAR ShiftStart = HOUR(MAX('mtb MasterTableUTC'[ShiftStartTimeUTC])) * 60 + MINUTE(MAX('mtb MasterTableUTC'[ShiftStartTimeUTC]))
VAR ShiftEnd = HOUR(MAX('mtb MasterTableUTC'[ShiftEndTimeUTC])) * 60 + MINUTE(MAX('mtb MasterTableUTC'[ShiftEndTimeUTC]))
VAR ShiftDiff = ShiftEnd - ShiftStart
RETURN
    IF(
        CurrentTime > ShiftEnd || CurrentTime < ShiftStart,
        MAX('mtb MasterTableUTC'[OutputTarget]),
        ((CurrentTime - ShiftStart) / ShiftDiff) * MAX('mtb MasterTableUTC'[OutputTarget])
    )
Basically, if the current time is outside the shift window, the target value should equal the total shift target; if it is within the shift, the target is calculated as the ratio of time passed within the shift. This does not work for shifts that cross midnight, because the shift end time value is technically earlier than the shift start time value. Any ideas on how to modify the measure to account for these shifts?
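To make the failure mode concrete, here is the measure's arithmetic sketched in plain Python, working in minutes since midnight like the DAX above (the function and sample values are illustrative only, not part of the report):

def target_value(now_min, start_min, end_min, output_target):
    # Outside the shift window: return the full shift target (the "maximum value").
    if now_min > end_min or now_min < start_min:
        return output_target
    # Inside the shift: pro-rate the target by the fraction of the shift elapsed.
    return (now_min - start_min) / (end_min - start_min) * output_target

# Site B shift 2: 4:00 PM - 2:00 AM, target 7000. At 8:00 PM we are four hours into
# the shift, but because the end value (120) is smaller than the start value (960),
# the guard treats 8:00 PM as "outside" the shift and returns 7000 instead of a
# pro-rated value.
print(target_value(20 * 60, 16 * 60, 2 * 60, 7000))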
I am trying to determine the interval between two dates that I create using DateComponents. If I make the first date 1 year prior to the second, I get 365 days, 0 hours, 0 minutes and 0 seconds. If I make the dates further apart (400 years here), suddenly my date is off by 11 minutes 56 seconds. Here is the code:
import Foundation
var mycal = Calendar(identifier: .iso8601)
var datum = DateComponents(year:1600, month:1, day:1, hour:12, minute:0,
second:0)
let j2000 = DateComponents(year:2000, month:1, day:1, hour:12, minute:0,
second:0)
let datum_date = mycal.date(from: datum)
let j2000_date = mycal.date(from: j2000)
let interval = mycal.dateComponents([.day, .hour, .minute, .second], from:j2000_date!, to:datum_date!)
print("Datum: \(datum_date!)") //1600-01-01 19:48:04 +0000
print("j2000: \(j2000_date!)") //2000-01-01 20:00:00 +0000
Note the next-to-last line: comments show what the print produces. I've tried it with the Gregorian calendar too, same problem. I'm not sure exactly how far back the inconsistency first occurs; I've gone back far enough to produce it, and it sometimes seems to "stick" as I change the code to move closer in time again. Strangely, the "interval" appears to show the correct number of days (here -146097), but the date shown is incorrect, and I will likely need that in my calculations. Does anyone have any ideas?
The difference could be related to leap year adjustments, but that would give a difference of 11 minutes and 14 seconds (there would still be some 40 seconds unaccounted for, 26 of which could be leap seconds).
see: https://www.infoplease.com/leap-year-101-next-when-list-days-calendar-years-calculation-last-rules
In theory, if you compute a multi-year time difference with a precision of minutes and seconds, you should get variations of 5 hours 48 minutes and 46 seconds in three out of four years, and get within 11 minutes and 14 seconds on the fourth year. I don't know how macOS (Unix) deals with that; there are probably a bunch of considerations that they need to take into account (especially beyond 400 years, where that 11 minutes 14 seconds gets adjusted).
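For reference, the arithmetic behind the 11 minutes 14 seconds figure: inserting a leap day every four years treats the year as 365 days 6 hours, while the actual figure quoted above is 365 days 5 hours 48 minutes 46 seconds, so each year is over-counted by about 11 minutes 14 seconds. Over 400 years that adds up to roughly 3 days, which is why the Gregorian calendar drops the leap day in century years not divisible by 400.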
If that level of precision is required by your use case, I would suggest reading up on the minute details of time calculations. Given that dates are stored internally as a number of seconds, going back to a precise day and time over these long periods must require some special math acrobatics.
See Apple's documentation here: https://developer.apple.com/reference/foundation/nscalendar
I have a math problem. I am trying to calculate the maximum number of samples when the response time is zero. My test has 3 samples (HTTP Request). The total test wait time is 11 seconds. The test is run for 15 minutes and 25 seconds. The ramp-up is 25 seconds, which means that every second 2 users are created until we reach 50.
Normally you have to wait for the server to respond, but I am trying to calculate the maximum number of samples (which means the response time is zero). How do I do this? I can't simply do ((15 * 60 + 25) / 11) * 50, because of the ramp-up.
Any ideas?
EDIT:
Maybe I should translate this problem into something generic and not specific to JMeter. So consider this (maybe it will make sense to me as well ;)).
50 people are walking laps around the park. Each lap takes exactly 11 seconds to run. We have 15 minutes and 25 seconds to walk as many laps as possible. We cannot all start at the same time, but we can start 2 every second (25 seconds until we are all running). How many laps can we run?
What I ended up doing was manually adding it all up...
Since it takes 25 s to get up to full speed, only 2 people can walk for 900 s, 2 people can walk for 901 s, 2 people can walk for 902 s, and so on, all the way up to the total of 50 people.
Adding those numbers together should give me my answer, I think.
If I am doing something wrong or working from a wrong conclusion, I would like to hear your opinion ;). Or if somebody can see a formula.
Thanks in advance
I have no idea about jmeter, but I do understand your question about people running round the park :-).
If you want an exact answer to that question which ignores partial laps round the park, you'll need to do (in C/java terminology) a for loop to work it out. This is because to ignore partial laps it's necessary to round down the number of possible laps, and there isn't a simple formula that's going to take the rounding down into account. Doing that in Excel, I calculate that 4012 complete laps are possible by the 50 people.
However, if you're happy to include partial laps, you just need to work out the total number of walking seconds available, summed over all the people (taking the ramp-up into account), and then divide by how many seconds it takes to run a lap. That total number of seconds is the sum of an arithmetic progression.
To write down the formula that includes partial laps, some notation is needed:
T = Total number of seconds (i.e. 900, given that there are 15 minutes)
P = number of People (i.e. 50)
S = number of people who can start at the Same time (i.e. 2)
L = time in seconds for a Lap (i.e. 11)
Then the formula for the total number of laps, including partial laps is
Number of Laps = P * (2 * T - (P/S - 1)) / (2*L)
which in this case equals 4036.36.
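As a quick numeric check of that formula, here is a short Python sketch (the variable names follow the definitions above):

# Laps including partial laps, with T = 900, P = 50, S = 2, L = 11.
T, P, S, L = 900, 50, 2, 11
print(P * (2 * T - (P / S - 1)) / (2 * L))   # 4036.36...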
Assume we're given:
T = total seconds = 925
W = walkers = 50
N = number of walkers that can start together = 2
S = stagger (seconds between starting groups) = 1
L = lap time = 11
G = number of starting groups = ceiling(W/N) = 25
Where all are positive, W and N are integers, and T >= S*(G-1) (i.e. all walkers have a chance to start). I am assuming the first group starts walking at time 0, not S seconds later.
We can break up the time into the ramp period:
Ramp laps = summation(integer i, 0 <= i < G, N*S*(G-i-1)/L)
= N*S*G*(G-1)/(2*L)
and the steady state period (once all the walkers have started):
Steady state laps = W * (T - S*(G-1))/L
Adding these two together and simplifying a little, we get:
Laps = ( N*S*G*(G-1)/2 + W*(T-S*(G-1)) ) / L
This works out to be 4150 laps.
There is a closed form solution if you're only interested in full laps. If that's the case, just let me know.
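For completeness, here is a loop-based Python sketch of the same model, using the assumptions listed above (first group starts at time 0, one group of N walkers starts every S seconds); it reproduces the 4150 figure and also counts complete laps only:

# T total seconds, W walkers, N walkers per starting group, S seconds between
# groups, L seconds per lap, G starting groups.
T, W, N, S, L = 925, 50, 2, 1, 11
G = -(-W // N)                       # ceiling(W / N) = 25

total_walk_seconds = 0
complete_laps = 0
for g in range(G):
    walk_time = T - g * S            # seconds this group spends walking
    total_walk_seconds += N * walk_time
    complete_laps += N * (walk_time // L)

print(total_walk_seconds / L)        # 4150.0, matching the closed form above
print(complete_laps)                 # complete laps only, under the same assumptions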
I am reading the book Programming Game AI by Example, and he gives code for a steering behaviour which causes the entity to decelerate so that it arrives gracefully at a target. After calculating dist, the distance from target to source, he then (essentially) does this:
double speed = dist/deceleration;
I just cannot understand where this comes from, however; am I just missing something really obvious? It is not listed as a known error in the book, so I am guessing it is correct.
If there were some physical truth to this, the units would have to match up on either side.
From what I understand, this is akin to Zeno's paradoxes where you are trying to reach something, but you never get there because you always only travel one nth of the remaining distance.
Suppose
the simulation proceeds in steps of one second.
deceleration = 5
distance = 1000 meters
With these initial conditions, speed will be set to 200 meters per second. Because the simulation proceeds at intervals of one second, we will travel exactly 200 meters (i.e. one fifth of the remaining distance), and end up at a distance of 800 meters from the target. The new speed is determined to be: 160 meters per second
Plots (by WolframAlpha) of the first 30 seconds, the last 30 seconds, and the last 10 seconds are not reproduced here.
Observations
Within the first 30 seconds, we travel roughly 998 meters
Within the first 50 seconds, we cover 999.985 meters
Within the last 10 seconds, we cover only ~1.2cm
As you can see, you get almost there very quickly, but it takes a long time to cover the final stretch.
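A minimal sketch of this simulation in Python (one-second steps, deceleration = 5, starting 1000 metres from the target):

# Each step the speed is recomputed as remaining / deceleration, so with
# one-second steps the remaining distance shrinks by a factor of 0.8 per step.
deceleration = 5
remaining = 1000.0                    # metres to the target

for t in range(1, 31):
    speed = remaining / deceleration  # the book's speed = dist / deceleration
    remaining -= speed * 1.0          # advance one simulated second
    print(t, round(speed, 3), round(remaining, 3))

# After 30 steps about 1.24 m remain (1000 * 0.8**30), i.e. roughly 998 m covered,
# matching the observations above.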
Maybe there is something missing in your calculation. For a constant acceleration (or deceleration), and ignoring initial conditions, the speed is
v = a * t
and the distance is
d = a * t^2 / 2
If you eliminate t in both equations you get
v = a * sqrt(2 * d / a)
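Simplifying, v = a * sqrt(2 * d / a) = sqrt(2 * a * d). For example, with a = 5 m/s^2 and d = 1000 m this gives v = sqrt(10000) = 100 m/s: the speed you must be travelling at 1000 m out if a constant 5 m/s^2 deceleration is to bring you to rest exactly at the target.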
I am interacting with a remote server. This remote server is in a different time zone. Part of the authentication requires me to produce:
"The number of seconds since January 1, 1970 00:00:00 GMT
The server will only accept requests where the timestamp
is within 600s of the current time"
The documentation of erlang:now() reveals that it can get me the elapsed time since 00:00 GMT, January 1, 1970 (zero hour)
on the assumption that the underlying OS supports this. It returns a size=3 tuple, {MegaSecs, Secs, MicroSecs}. I tried using element(2,erlang:now()) but the remote server sends me this message:
Timestamp expired: Given timestamp (1970-01-07T14:44:42Z)
not within 600s of server time (2012-01-26T09:51:26Z)
Which of these 3 values is the required number of seconds since Jan 1, 1970? What am I not doing right? Is there something I have to do with universal time, as in calendar:universal_time()?
UPDATE: I managed to work around the time-expired problem by using this:
seconds_1970() ->
    %% Seconds elapsed since 1970-01-01 00:00:00 UTC.
    T1 = {{1970,1,1}, {0,0,0}},
    T2 = calendar:universal_time(),
    {Days, {HH, Mins, Secs}} = calendar:time_difference(T1, T2),
    (Days * 24 * 60 * 60) + (HH * 60 * 60) + (Mins * 60) + Secs.
However, the question still remains. There must be a way, a fundamental Erlang way of getting this, probably a BIF, right?
You have to calculate the UNIX time (seconds since 1970) from the results of now(), like this:
{MegaSecs, Secs, MicroSecs} = now().
UnixTime = MegaSecs * 1000000 + Secs.
Just using the second entry of the tuple only gives you the number of seconds since the most recent whole multiple of a million seconds after the UNIX epoch, which is why the server saw a date in early January 1970.
[2017 Edit]
now is deprecated, but erlang:timestamp() is not and returns the same format as now did.
Which of these 3 parameters is the required number of seconds since Jan 1, 1970 ?
All three of them, collectively. Look at the given timestamp. It's January 7, 1970. Presumably Secs will be between 0 (inclusive) and 1,000,000 (exclusive). One million seconds is only 11.574 days. You need to use the megaseconds as well as the seconds. Since the error tolerance is 600 seconds you can ignore the microseconds part of the response from erlang:now().
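For illustration only, here is the same decomposition sketched in Python (not Erlang): it splits an epoch timestamp the way erlang:now()/erlang:timestamp() does and shows why the middle element on its own always maps to the first ~11.6 days of 1970, as in the error message above.

import time
from datetime import datetime, timezone

now = time.time()
mega, secs = divmod(int(now), 1_000_000)               # {MegaSecs, Secs, ...}
micros = int((now % 1) * 1_000_000)                    # the ignorable third element

print(mega * 1_000_000 + secs)                         # proper Unix time in seconds
print(datetime.fromtimestamp(secs, tz=timezone.utc))   # always lands in early January 1970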