Departure datetime: 2015-08-15T09:50:00 (GMT offset = +10)
Arrival datetime: 2015-08-15T06:30:00 (GMT offset = -7)
Flight duration = Arrival datetime - Departure datetime
This gives a flight duration of 13:40 hours, but I am unable to understand how it is calculated. According to which formula does it give 13:40 hours? Could you please help and explain?
Thanks in advance.
2015-08-15T09:50:00 (GMT offset = +10) = 2015-08-14T23:50:00Z
2015-08-15T06:30:00 (GMT offset = -7) = 2015-08-15T13:30:00Z
2015-08-15T13:30:00Z - 2015-08-14T23:50:00Z = 13h40m
I think you need to brush up on your time zone concepts a bit, because there's nothing wrong with the calculation itself. If you're feeding it bad data, it will return bad results.
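If it helps, here is a minimal sketch in R (the language used in most of this thread) that reproduces the same arithmetic by normalising both timestamps to UTC. Note that the POSIX-style "Etc/GMT" zone names have inverted signs, so "Etc/GMT-10" means UTC+10:

dep <- as.POSIXct("2015-08-15 09:50:00", tz = "Etc/GMT-10")  # departure, UTC+10
arr <- as.POSIXct("2015-08-15 06:30:00", tz = "Etc/GMT+7")   # arrival, UTC-7
difftime(arr, dep, units = "hours")
# Time difference of 13.66667 hours, i.e. 13h40m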
I'm reading (JSON) weather data into a small Delphi application. The wind direction is represented by a float value from 0-360. What I want is to convert this value into 8 directions (N, NE, E, NW, S, SE, W, SW) on the compass and show them on my application form as an arrow symbol. I can use a lot of if..then to solve this, but it would be much cleaner code to just calculate it. My mathematical skills are not what they used to be, so I hope some of you could help me? Thanks.
Not Delphi, but perhaps something like this?
# nine entries: the trailing "N" catches values that round up to 360
winds = ["N", "NE", "E", "SE", "S", "SW", "W", "NW", "N"]
wind_degrees = 225.0  # the wind direction read from the weather data (0-360)
index = int(round(wind_degrees / 45))
print(winds[index])  # -> SW
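The same idea ports directly to other languages. For what it's worth, here is the equivalent in R (variable names are mine), using a modulo so the lookup table needs exactly eight entries instead of a duplicated "N":

winds <- c("N", "NE", "E", "SE", "S", "SW", "W", "NW")
wind_degrees <- 225                            # example input in degrees (0-360)
index <- (round(wind_degrees / 45) %% 8) + 1   # %% 8 wraps 360 back to N; +1 for 1-based indexing
winds[index]                                   # "SW"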
PREFACE: This is a question about using linear modelling to understand an electricity generation system, but you actually don't need to know very much about either to understand it. I'm pretty sure this is a question about R.
I am building a linear model to optimise the hourly dispatch of electric generators in a country (called "Lebanon", but it's a little fictitious in terms of the data I am using). I have a model which optimises the hourly generation satisfactorily; the code looks like this:
lp.newobjfun.norelax <- lpSolve::lp(dir = "min",
                                    objfun.lebanon.postwalk1,
                                    constraintmatrix.lebanon.postwalk.allgenerators,
                                    directions.lebanon.postwalk3,
                                    rhs.lebanon.postwalk4)
The above works fine. Of course, running it for a single day is a bit useless, so instead I want to run it iteratively for every day of the year. The code below is supposed to do that, but the returned value (the objective function's value) is always 0. Any ideas what I am doing wrong?
for (i in 1:365) {
  rhs.lebanon.postwalk4[1:24] <- as.numeric(supplylebanon2010wholeyear[i, ])
  lp.newobjfun.norelax <- lpSolve::lp(dir = "min",
                                      objfun.lebanon.postwalk1,
                                      constraintmatrix.lebanon.postwalk.allgenerators,
                                      directions.lebanon.postwalk3,
                                      rhs.lebanon.postwalk4)
  print(lp.newobjfun.norelax$solution)
}
Just to be clear: in the second version, the right-hand sides of the first 24 constraints are modified to reflect how the hourly supply of electricity changes on each day of the year.
Thanks in advance!
Okay, never mind, I've figured this out: there's a unit conversion from kWh to MWh which I hadn't taken care of. Sorry for any bother!
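For anyone who lands here with a similar symptom, a sketch of the kind of fix implied, assuming the supply data is in kWh while the rest of the model is in MWh (the direction of the conversion is my guess, not something stated above):

# rescale the daily supply from kWh to MWh before updating the RHS
rhs.lebanon.postwalk4[1:24] <- as.numeric(supplylebanon2010wholeyear[i, ]) / 1000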
I am using the set.seed and kmeans functions. Although I use set.seed, my cluster centers keep changing even though my data isn't. It only changes from day to day: within the same day there aren't any changes, but the next day my clusters will change. I'm assuming the set.seed function is causing this. If so, does anyone know how to control the randomness within kmeans or a similar function? Can someone give me some insight? Sample code below:
set.seed(1234)
ITsegment2 <- kmeans(iTeller_z, 4)
There is probably something more clever, but here is an easy solution:
set.seed(as.numeric(Sys.Date()))
Sys.Date() returns today's date, and as.numeric() transforms it into a number, so the number will change every day.
Cheers
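In context, the suggestion replaces the fixed seed, so the clustering stays stable within a day but refreshes the next day:

set.seed(as.numeric(Sys.Date()))  # seed derived from today's date
ITsegment2 <- kmeans(iTeller_z, 4)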
I have not worked with SPSS (.sav) files before and am trying to work with some data files provided to me by importing them into R. I did not receive any explanation of the files, and because communication is difficult I am trying to figure out as much as I can on my own.
Here's my first question. This is what the Date field looks like in an R data frame after import:
> dataset2$Date[1:4]
[1] 13608172800 13608259200 13608345600 13608345600
I don't know what dates the data is supposed to be for, but I found that if I divide the above numbers by 10, that seems to give a reasonable date (in February 2013). Can anyone confirm this is indeed what the above represents?
My second question is regarding another column called Begin_time. Here's what that looks like:
> dataset2$Begin_time[1:4]
[1] 29520 61800 21480 55080
Any idea what this is representing? I want to believe this is some representation of time of day because the records are for wildlife observations, but I haven't got more info than that to try to guess. I noticed that if I take the difference between End_Time and Begin_time I get numbers like 120 and 180, which seems like minutes to me (3 hours seems reasonable to observe a wild animal), but the absolute numbers are far greater than the number of minutes in a day (1440), so that leaves me puzzled. Is this some time keeping format from SPSS? If so, what's the logic?
Unfortunately, I don't have access to SPSS, so any help would be much appreciated.
I had the same problem and this function is a good solution:
pss2date <- function(x) as.Date(x/86400, origin = "1582-10-14")
This is where I found the answer:
http://scs.math.yorku.ca/index.php/R:_Importing_dates_from_SPSS
Dates in SPSS Statistics are represented as floating-point doubles holding the number of seconds since October 14, 1582. If you use the SPSS R plugin APIs, they can be converted to R dates automatically, but any proper converter should be able to do this for you.
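Putting the pieces together, here is a minimal sketch. The Date conversion uses the function above; the Begin_time conversion assumes that SPSS time-of-day variables are likewise stored as seconds, counted from midnight (that part is my assumption, not confirmed in the thread):

pss2date <- function(x) as.Date(x / 86400, origin = "1582-10-14")
pss2date(13608172800)  # first Date value from the question -> "2014-01-04"

secs <- 29520          # first Begin_time value from the question
sprintf("%02d:%02d", secs %/% 3600, (secs %% 3600) %/% 60)  # "08:12"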
I have quite an interesting task at work: I need to find out how much time a user spent doing something, and all I have are the timestamps of his saves. I know for a fact that the user saves after each small portion of work, so the timestamps are not far apart.
The obvious solution would be to find out how much time one small item could possibly take, then go through the sorted timestamps: if the difference between the current one and the previous one is more than that, it means the user had a coffee break, and if it's less, we can just add the difference to the total sum. Simple example code to illustrate that:
DateTime? prev_timestamp = null;
TimeSpan total_time = TimeSpan.Zero;
foreach (var timestamp in timestamps) {
    if (prev_timestamp != null) {
        var diff = timestamp - prev_timestamp.Value;
        if (diff < threshold) {
            total_time += diff;  // gap is small enough to count as working time
        }
    }
    prev_timestamp = timestamp;
}
The problem is, while I know roughly how much time is spent on one small portion, I don't want to depend on it. What if some user is just that much slower than my prediction? I don't want him to be left without a paycheck. So I was thinking: could there be some clever math solution to this problem that would work without knowing what time interval is acceptable?
PS. Sorry for the misunderstanding; of course no one would pay people based on these numbers, and even if they would, they understand that it is just an approximation. But I'd like to find a solution that would produce numbers as close to real life as possible.
You could get the median TimeSpan and then discard those TimeSpans which are off by, say, >50%.
But this algorithm should IMHO only be used to get estimated spent hours per project, not for payrolls.
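A sketch of that idea, shown in R for brevity (the data and names here are hypothetical), under one reading of "off by >50%", namely dropping gaps more than 50% above the median:

timestamps <- c(0, 130, 250, 400, 3800, 3920, 4050)  # hypothetical save times, in seconds
gaps <- diff(sort(timestamps))   # intervals between consecutive saves
m <- median(gaps)                # typical time for one small portion of work
worked <- gaps[gaps <= 1.5 * m]  # discard the long gaps (coffee breaks)
sum(worked)                      # estimated total working time: 650 seconds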
You need to look at either the standard deviation for the group of all users, or the variance in the intervals for a single user, or better, a combination of the two for your sample set.
Grab all periods and look at the average? If some are far outside the average span you could discard them or use an adjusted value for them in the average.
I agree with Groo that using something based only on the 'save' timestamp is NOT what you should do - it will NEVER provide you with the actual time spent on the tasks.
The clever math you seek is called "standard deviation".