Timestamp operations in SQLAlchemy - datetime

I am trying to compare two timestamps, say x and y, where y should be the current timestamp minus a certain number of seconds s, for example y = sa.func.current_timestamp() - s. But it seems that y is not a timestamp. How can I convert it to a timestamp?
Example code:

import sqlalchemy as sa

def foo(x, y):
    # x is a timestamp, y is a number of seconds
    y = sa.func.current_timestamp() - y
    return x > y
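A minimal sketch of the likely fix, assuming the timestamps are Python datetime objects on the client side: subtract a datetime.timedelta rather than a bare number, so the result stays a timestamp. On the SQL side, SQLAlchemy will likewise render sa.func.current_timestamp() - timedelta(seconds=s) as interval arithmetic on backends that support it (e.g. PostgreSQL); the plain-Python version below shows the same comparison without a database.

```python
# Minimal sketch, no database involved: subtracting a timedelta from a
# datetime keeps the result a datetime, so the comparison is well defined.
from datetime import datetime, timedelta

def foo(x, seconds):
    # x is a timestamp, seconds is a number of seconds
    y = datetime.now() - timedelta(seconds=seconds)
    return x > y

# a timestamp taken "now" is newer than now minus 60 seconds
print(foo(datetime.now(), 60))  # → True
```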

Related

"Sapply" function in R counterpart in MATLAB to convert a code from R to MATLAB

I want to convert this R code to MATLAB (not execute the R code from within MATLAB).
The code in R is as follows:
data_set <- read.csv("lab01_data_set.csv")
# get x and y values
x <- data_set$x
y <- data_set$y
# get number of classes and number of samples
K <- max(y)
N <- length(y)
# calculate sample means
sample_means <- sapply(X = 1:K, FUN = function(c) {mean(x[y == c])})
# calculate sample deviations
sample_deviations <- sapply(X = 1:K, FUN = function(c) {sqrt(mean((x[y == c] - sample_means[c])^2))})
To implement it in MATLAB I write the following:
%% Reading Data
% read data into memory
X=readmatrix("lab01_data_set(ViaMatlab).csv");
% get x and y values
x_read=X(1,:);
y_read=X(2,:);
% get number of classes and number of samples
K = max(y_read);
N = length(y_read);
% Calculate sample mean - 1st method
% funct1 = @(c) mean(c);
% G1=findgroups(y_read);
% sample_mean=splitapply(funct1,x_read,G1)
% Calculate sample mean - 2nd method
for m = 1:K
    sample_mean(1, m) = mean(x_read(y_read == m));
end
sample_mean;
% Calculate sample deviation - 2nd method
for m = 1:K
    mu = mean(x_read(y_read == m));  % scalar, so the sample_mean vector is not clobbered
    sample_deviation(1, m) = sqrt(mean((x_read(y_read == m) - mu).^2));
    sample_mean1(1, m) = mu;
end
sample_deviation;
sample_mean1;
As you can see, I worked out how to use a for loop in MATLAB instead of sapply in R (the 2nd method in the code), but I do not know how to do it with a function (possibly splitapply or some other one).
PS: I do not know how to upload the data, so sorry for that part.
The MATLAB equivalent of R's sapply is arrayfun, along with its relatives cellfun, structfun and varfun, depending on the data type of your input.
For example, in R:
> sapply(1:3, function(x) x^2)
[1] 1 4 9
is equivalent to this in MATLAB:
>> arrayfun(@(x) x^2, 1:3)
ans =
     1     4     9
Note that if the result of the function you pass to arrayfun, cellfun, etc. doesn't have identical type and size for every input, you'll need to specify 'UniformOutput', false.
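For comparison, the same grouped means and within-class deviations can be sketched in plain Python (the data values below are made up for illustration, not the CSV from the question):

```python
# Sketch of sapply(1:K, function(c) mean(x[y == c])) in plain Python.
from math import sqrt

x = [1.0, 2.0, 3.0, 10.0, 11.0, 12.0]
y = [1, 1, 1, 2, 2, 2]          # class labels 1..K
K = max(y)

sample_means = [
    sum(xi for xi, yi in zip(x, y) if yi == c) / y.count(c)
    for c in range(1, K + 1)
]
sample_deviations = [
    sqrt(sum((xi - sample_means[c - 1]) ** 2
             for xi, yi in zip(x, y) if yi == c) / y.count(c))
    for c in range(1, K + 1)
]
print(sample_means)  # → [2.0, 11.0]
```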

Plotting timedeltas in matplotlib, Python

I'm not experienced with date manipulation in Python, so plotting timedeltas is an awkward problem for me.
I have a list of dates:
Time = ['2004-09-21 01:15:53', '2004-09-21 20:49:47', '2004-09-18 09:54:32',...]
Let's define:
from datetime import datetime

def __datetime(date_str):
    return datetime.strptime(date_str, '%Y-%m-%d %H:%M:%S')
Then:
dates = [__datetime(x) for x in Time]
Now I can subtract dates:
delta = []
for i in range(len(dates) - 1):
    delta.append(dates[i + 1] - dates[i])
delta looks like this:
[datetime.timedelta(0, 70434), datetime.timedelta(-4, 47085),...]
Alternatively, each delta can be represented as a string:
delta = [str(k) for k in delta]
['19:33:54', '-4 days, 13:04:45', '775 days, 12:07:02',...]
My question is: how can I plot these deltas on the y-axis and get sensible labels?
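One common approach, sketched below with the three timestamps from the question: convert each timedelta to a plain number with total_seconds() and plot those, labeling the axis with the unit yourself (the matplotlib lines are commented out and assume it is installed).

```python
# Hedged sketch: timedeltas become plain numbers via total_seconds(),
# which matplotlib can plot and label sensibly (hours here).
from datetime import datetime

times = ['2004-09-21 01:15:53', '2004-09-21 20:49:47', '2004-09-18 09:54:32']
dates = [datetime.strptime(t, '%Y-%m-%d %H:%M:%S') for t in times]
deltas = [b - a for a, b in zip(dates, dates[1:])]

# total_seconds() handles negative deltas correctly
hours = [d.total_seconds() / 3600 for d in deltas]
print(hours)

# plotting part (assumes matplotlib is installed):
# import matplotlib.pyplot as plt
# plt.plot(hours, 'o')
# plt.ylabel('gap to next sample [hours]')
# plt.show()
```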

Extract rows / columns of a matrix into separate variables

The following question came up in my course yesterday:
Suppose I have a matrix M = rand(3, 10) that comes out of a calculation, e.g. an ODE solver.
In Python, you can do
x, y, z = M
to extract the rows of M into the three variables, e.g. for plotting with matplotlib.
In Julia we could do
M = M' # transpose
x = M[:, 1]
y = M[:, 2]
z = M[:, 3]
Is there a nicer way to do this extraction?
It would be nice to be able to write at least (approaching Python)
x, y, z = columns(M)
or
x, y, z = rows(M)
One way would be
columns(M) = [ M[:,i] for i in 1:size(M, 2) ]
but this will make an expensive copy of all the data.
To avoid this would we need a new iterator type, ColumnIterator, that returns slices? Would this be useful for anything other than using this nice syntax?
columns(M) = [ slice(M,:,i) for i in 1:size(M, 2) ]
and
columns(M) = [ sub(M,:,i) for i in 1:size(M, 2) ]
They both return a view, but slice drops all dimensions indexed with scalars. (In current Julia, both have been replaced by view, and eachcol/eachrow provide exactly the column and row iterators asked for above.)
A nice alternative that I have just found, if M is a Vector of Vectors (instead of a matrix), is to use zip:
julia> M = Vector{Int}[[1,2,3],[4,5,6]]
2-element Array{Array{Int64,1},1}:
[1,2,3]
[4,5,6]
julia> a, b, c = zip(M...)
Base.Zip2{Array{Int64,1},Array{Int64,1}}([1,2,3],[4,5,6])
julia> a, b, c
((1,4),(2,5),(3,6))
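The Python behaviour the question refers to can be sketched without any libraries; zip(*M) is the Python analogue of the zip(M...) trick above:

```python
# Row unpacking and the zip transpose trick in plain Python.
M = [[1, 2], [3, 4], [5, 6]]   # three rows
x, y, z = M                    # unpacking yields the rows: x == [1, 2]
cols = list(zip(*M))           # transpose: columns as tuples
print(cols)                    # → [(1, 3, 5), (2, 4, 6)]
```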

How to calculate geographical distance between multiple individuals at same timestep in R

I have a matrix (22467 rows and 4 columns) of x and y GPS locations (decimal degrees) at multiple time steps (one hour apart) for multiple individuals (ID, n = 13). Example of the dataset (saved as a csv file):
ID Time x y
98427 01:00 43.97426 -59.56677
98427 02:00 43.97424 -60.56970
98428 01:00 43.97434 -60.52222
98428 02:00 43.97435 -59.24356
98429 01:00 43.97657 -59.36576
98429 02:00 43.97432 -59.98674
I would like to calculate the distance between each individual, for all combinations, at each time step. Thus, at Time = 01:00, distance between 98427 and 98428, 98427 and 98429, 98428 and 98429, etc. How can I do this in R?
library(plyr)
data = iris
data = data[c(1:5, 81:85, 141:145), 3:5]
data$time = rep(1:5, 3)
dlply(data, .(time), function(x) {dist(x[ , 1:2])})
I just played with the iris dataset, but the methodology is very similar.
1. Split the data by time
2. Use column x and y and pass to dist() function which returns matrix of distances
3. Store each as list
Then you can pull values from the list, whose entries are named by time.
Update: sorry for naively assuming Euclidean distance.
Here's a somewhat crude implementation of Haversine distance.
library(geosphere)
havdist = function(x) {
  n = nrow(x)
  res = matrix(NA, nrow = n, ncol = n)
  for (i in 1:n) {
    for (j in i:n) {
      # fill both triangles; distHaversine expects (longitude, latitude)
      res[i, j] = res[j, i] = distHaversine(x[i, ], x[j, ])
    }
  }
  return(res)
}
Then supply havdist instead of dist in the dlply call above.
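For reference, the same pairwise Haversine computation can be sketched in plain Python, using two of the 01:00 positions from the question (the function name and the 6371 km mean Earth radius are assumptions):

```python
# Hedged sketch: pairwise Haversine distances for one time step,
# in plain Python (geosphere's distHaversine plays this role in R).
from math import radians, sin, cos, asin, sqrt
from itertools import combinations

def haversine_km(p, q):
    # p, q are (latitude, longitude) pairs in decimal degrees
    lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # assumed mean Earth radius of 6371 km

# the 01:00 positions of individuals 98427 and 98428 from the question
points = {98427: (43.97426, -59.56677), 98428: (43.97434, -60.52222)}
for (i, p), (j, q) in combinations(points.items(), 2):
    print(i, j, round(haversine_km(p, q), 2))
```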

Merging two vectors at random in R

I have two vectors x and y, where x is larger than y. For example (x is set to all zeros here, but that need not be the case):
x = rep(0,20)
y = c(2,3,-1,-1)
What I want to accomplish is to overlay some copies of y onto x at random positions. So in the above example, x might look like:
0,0,2,3,-1,-1,0,0,0,0,2,3,-1,-1,...
Basically, I'll step through each value in x, draw a random number, and if that number is less than some threshold, overlay y over the next 4 places in x, unless I've reached the end of x. Would any of the apply functions help? Thanks much in advance.
A simple way of doing it would be to choose points at random (the same length as x) from the two vectors combined:
sample(c(x, y), length(x), replace = TRUE)
If you want to introduce some probability into it, you could do something like:
p <- c(rep(2, each = length(x)), rep(1, each = length(y)))
sample(c(x, y), length(x), prob = p, replace = TRUE)
This is saying that an x point is twice as likely to be chosen over a y point (change the 2 and 1 in p accordingly for different probabilities).
Short answer: yes :-) . Write some function like
ranx <- runif(length(x) - length(y) + 1)
# some loop or apply function...
if (ranx[j] < threshold) x[j:(j + length(y) - 1)] <- y
# note the parentheses: j:j + length(y) would parse as (j:j) + length(y)
# and make sure to stop the loop at length(x) - length(y) + 1
Something like the following worked for me.
i = 1
while(i <= length(x) - length(y) + 1){
  p.rand = runif(1, 0, 1)
  if(p.rand < prob[i]){
    x[i:(i + length(y) - 1)] = y
    i = i + length(y) - 1  # the i + 1 below completes the jump past the block
  }
  i = i + 1
}
where prob is a vector of per-position probabilities.
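The stepping/overlay logic described above can be sketched in Python as well (function name and parameters are illustrative):

```python
# Hedged sketch: walk through x, and with probability `threshold` copy y
# over the next len(y) slots, then jump the cursor past the copied block.
import random

def overlay(x, y, threshold, seed=None):
    rng = random.Random(seed)
    x = list(x)
    i = 0
    while i <= len(x) - len(y):       # never run past the end of x
        if rng.random() < threshold:
            x[i:i + len(y)] = y       # overlay a whole copy of y
            i += len(y)               # skip past the block just placed
        else:
            i += 1
    return x

result = overlay([0] * 20, [2, 3, -1, -1], threshold=0.2, seed=42)
print(result)
```

Because the cursor jumps past each placed block, copies of y never overlap.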
