How to use scale() in Interpolations.jl? - julia

I'm interested in the fastest way to linearly interpolate a 1D function on regularly spaced data.
I don't quite understand how to use the scale function in Interpolations.jl:
using Interpolations
v = [x^2 for x in 0:0.1:1]
itp=interpolate(v,BSpline(Linear()),OnGrid())
itp[1]
# 0.0
itp[11]
# 1.0
scale(itp,0:0.1:1)
itp[0]
# -0.010000000000000002
# why is this not equal to 0.0, i.e. the value at the lowest index?

The scale function does not alter the interpolation object in place (that would be the job of a mutating scale!); it returns a new, scaled object, which you assign and index instead:
julia> sitp = scale(itp,0:0.1:1)
11-element Interpolations.ScaledInterpolation{Float64,1,Interpolations.BSplineInterpolation{Float64,1,Array{Float64,1},Interpolations.BSpline{Interpolations.Linear},Interpolations.OnGrid,0},Interpolations.BSpline{Interpolations.Linear},Interpolations.OnGrid,Tuple{FloatRange{Float64}}}:
julia> sitp[0]
0.0
Thanks to spencerlyon for pointing that out.
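For reference, a minimal sketch (same indexing-style API as above; the numeric comments assume linear interpolation of v = x^2 on the grid 0:0.1:1) of how the scaled object maps x-coordinates to interpolated values:
using Interpolations
v = [x^2 for x in 0:0.1:1]
itp = interpolate(v, BSpline(Linear()), OnGrid())
sitp = scale(itp, 0:0.1:1)   # index with x-coordinates instead of 1:11
sitp[0.0]   # 0.0, same as itp[1]
sitp[1.0]   # 1.0, same as itp[11]
sitp[0.35]  # ≈ 0.125, linear interpolation between 0.3^2 = 0.09 and 0.4^2 = 0.16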

Related

R - spatstat: Calculate density for a new point

Is it possible to use spatstat to estimate the intensity function for a given ppp object and calculate its value at a new point? For example, can I evaluate D at new_point:
# packages
library(spatstat)
# define a random point within Window(swedishpines)
new_point <- ppp(x = 45, y = 45, window = Window(swedishpines))
# estimate density
(D <- density(swedishpines))
#> real-valued pixel image
#> 128 x 128 pixel array (ny, nx)
#> enclosing rectangle: [0, 96] x [0, 100] units (one unit = 0.1 metres)
I was thinking that maybe I could superimpose() the two ppp objects (i.e. swedishpines and new_point) and then run density with at = "points" and weights = c(rep(1, npoints(swedishpines)), 0), but I'm not sure whether that's the suggested approach (or whether the appended point is ignored during the estimation process).
I know that it may sound like a trivial question, but I read some docs and didn't find an answer or a solution.
There are two ways to do this.
The first is simply to take the pixel image of intensity, and extract the pixel values at the desired locations using [:
D <- density(swedishpines)
v <- D[new_points]
See the help for density.ppp and [.im.
The other way is to use densityfun:
f <- densityfun(swedishpines)
v <- f(new_points)
See the help for densityfun.ppp.
The first route is more efficient and the second way is more accurate.
Technical issue: if some of the new_points lie outside the window of swedishpines, then the value at those points is (mathematically) undefined. Both of the methods described above will simply ignore such points, and the resulting vector v will be shorter than the number of new points. If you need to handle this contingency, the easiest way is to use D[new_points, drop=FALSE], which returns NA values for such locations.

Values on a simulated bell curve in golang

My math is a bit elementary so I apologize for any assumptions in advance.
I want to fetch values that exist on a simulated bell curve. I don't want to actually create a bell curve or plot one, I'd just like to use a function that given an input value can tell me the corresponding Y axis value on a hypothetical bell curve.
Here's the full problem statement:
I am generating floating point values between 0.0 and 1.0.
0.50 represents 2.0 on the bell curve, which is the maximum. The values < 0.50 and > 0.50 start dropping on this bell curve, so for example 0.40 and 0.60 are the same and could be something like 1.8. 1.8 is arbitrarily chosen for this example, and I'd like to know how I can tweak this 'gradient'.
Right now I'm doing a very crude implementation; for example, for any value > 0.40 and < 0.60 the function returns 2.0, but I'd like to 'smooth' this and gain more 'control' over the descent/gradient.
Any ideas how I can achieve this in Go?
The Gaussian function, f(x) = a * exp(-(x-b)^2 / (2*c^2)), described at https://en.wikipedia.org/wiki/Gaussian_function, has a bell-curve shape. Example implementation:
package main

import (
    "fmt"
    "math"
)

const (
    a = 2.0 // height of the curve's peak
    b = 0.5 // position of the peak
    c = 0.1 // standard deviation controlling the width of the curve
            // (smaller c -> narrower peak, larger c -> wider, flatter curve)
)

func curveFunc(x float64) float64 {
    return a * math.Exp(-math.Pow(x-b, 2)/(2.0*math.Pow(c, 2)))
}

func main() {
    // sample a few inputs around the peak at 0.5
    for _, x := range []float64{0.40, 0.50, 0.60} {
        fmt.Printf("curveFunc(%.2f) = %f\n", x, curveFunc(x))
    }
}

Julia: Syntax for indexing into an n-dimensional array without fixing n

I'm trying to use Julia's Interpolations package to interpolate a function sampled on an n-dimensional grid. The Interpolations package uses a syntax similar to array indexing for specifying the point at which to interpolate the data (in fact it appears that the Interpolations package imports the getindex function used for array indexing from Base). For example for n=2 the following code:
using Interpolations
A_grid = [1 2; 3 4]
A = interpolate((0:1, 0:1), A_grid, Gridded(Linear()))
a = A[0.5, 0.5]
println(a)
prints the result of a linear interpolation at the midpoint (0.5, 0.5).
Now, if I have an n-dimensional vector (e.g. index_vector = [0.5, 0.5] in n=2 dimensions), I see that I can get the same result by writing
a = A[index_vector[1], index_vector[2]]
but I am unable to do this in general. That is, I would like to find/write a function that takes an n-dimensional array A and a vector index_vector of length n and returns
A[index_vector[1], ... , index_vector[n]]
where n is not known beforehand. Is there a way to do this when the entries of index_vector are not necessarily integers?
I think you can use the splat operator (...) for that. ... "distributes" the elements of a collection into individual argument slots:
using Interpolations
A_grid = [1 2; 3 4]
A = interpolate((0:1, 0:1), A_grid, Gridded(Linear()))
index_vector = [0.5, 0.5]
a = A[index_vector...]
println(a)
gives the same result as your example.
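The same splatting works for any number of dimensions. A hedged sketch with a hypothetical 3-D grid (B_grid and B are made-up names for this illustration, not from the question):
using Interpolations
B_grid = reshape(collect(1.0:8.0), 2, 2, 2)              # 2x2x2 grid of values
B = interpolate((0:1, 0:1, 0:1), B_grid, Gridded(Linear()))
index_vector = [0.5, 0.5, 0.5]                           # length n = 3 this time
b = B[index_vector...]                                   # same as B[0.5, 0.5, 0.5]
println(b)                                               # 4.5, the mean of the eight corner values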

Correct usage of Interpolations.jl outside the domain

I'm porting Matlab code to Julia and so far I'm having amazing results:
code that takes more than 5 hours in Matlab runs in a little more than 8 minutes in Julia! However, I have a problem...
In Matlab I have:
for xx = 1:xlong
    for yy = 1:ylong
        U_alturas(xx,yy,:) = interp1(squeeze(NivelAltura_int(xx,yy,:)), squeeze(U(xx,yy,:)), interpolar_a);
        V_alturas(xx,yy,:) = interp1(squeeze(NivelAltura_int(xx,yy,:)), squeeze(V(xx,yy,:)), interpolar_a);
    end
end
This produces NaNs whenever a point in interpolar_a is outside the range of NivelAltura_int.
In Julia I'm trying to do the same with:
for xx in 1:xlong
    for yy in 1:ylong
        AltInterp = interpolate((Znw_r,), A_s_c_r, Gridded(Linear()));
        NivelAltura_int[xx,yy,1:end] = AltInterp[Znu[1:end]]
        Uinterp = interpolate((squeeze(NivelAltura_int[xx,yy,1:end],(1,2)),), squeeze(U[xx,yy,1:end],(1,2)), Gridded(Linear()));
        Vinterp = interpolate((squeeze(NivelAltura_int[xx,yy,1:end],(1,2)),), squeeze(V[xx,yy,1:end],(1,2)), Gridded(Linear()));
        U_alturas[xx,yy,1:end] = Uinterp[Alturas[1:end]];
        V_alturas[xx,yy,1:end] = Vinterp[Alturas[1:end]];
    end
end
using the package Interpolations.jl. Whenever the point is outside the domain, this package extrapolates, which is incorrect for my purposes.
I could add a few lines of code that check and substitute the values outside the domain with NaNs, but I believe that would add some time to the computation and is not very elegant.
In the documentation of the package, it mentions a kind of object like this:
Uextrap = extrapolate(Uinterp,NaN)
to control the behavior outside the domain, but I haven't found out how to use it. I've tried adding it below Uinterp and evaluating it, but naturally it won't work that way.
Could you help me on this one?
Thanks!
It looks like you may be running into two issues here. First, there's been some recent work on gridded extrapolations (#101) that may not be in the tagged version yet. If you're willing to live on the edge, you can Pkg.checkout("Interpolations") to use the development version (Pkg.free("Interpolations") will put you back on the stable version again).
Secondly, it looks like there's still a missing method for vector-valued gridded extrapolations (issue #24):
julia> using Interpolations
itp = interpolate((collect(1:10),), collect(.1:.1:1.), Gridded(Linear()))
etp = extrapolate(itp, NaN);
julia> etp[.5:1:10.5]
ERROR: BoundsError: # ...
in throw_boundserror at abstractarray.jl:156
in getindex at abstractarray.jl:488
As you can see, it's trying to use the generic definitions for all abstract arrays, which will of course throw bounds errors. Interpolations just needs to add its own definition.
In the meantime, you can use a comprehension with scalar indexing:
julia> [etp[x] for x=.5:1:10.5]
11-element Array{Any,1}:
NaN
0.15
0.25
0.35
0.45
0.55
0.65
0.75
0.85
0.95
NaN
The following sample shows how extrapolate works:
Preparation:
using Interpolations
f(x) = sin((x-3)*2pi/9 - 1)
xmax = 10
A = Float64[f(x) for x in 1:xmax] # domain .EQ. 1:10
itpg = interpolate(A, BSpline(Linear()), OnGrid())
The itpg object extrapolates at points outside the domain according to its interpolation type:
itpg[2] # inside => -0.99190379965505
itpg[-2] # outside => 0.2628561875219271
Now we use an extrapolate object to control the extrapolation behavior:
etpg = extrapolate(itpg, NaN);
etpg[2]==itpg[2] # same result when point is inside => true
isnan(etpg[-2]) # NaN when the point is outside => true
So an extrapolate object interpolates like its parent inside the domain, while extrapolating in a custom manner outside it.
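Putting the two answers together, here is a minimal, hedged sketch (with made-up variable names, not the asker's actual arrays) of how a gridded interpolant, extrapolate with NaN, and scalar indexing in a comprehension can reproduce Matlab's interp1 behavior of returning NaN outside the data range; it assumes a version of Interpolations.jl where gridded extrapolation (#101) is available:
using Interpolations
knots   = [0.0, 1.0, 2.5, 4.0]     # irregular grid, playing the role of NivelAltura_int
samples = [0.0, 2.0, 3.0, 1.0]     # values on that grid, playing the role of U
targets = [-1.0, 0.5, 3.0, 5.0]    # points to evaluate at, playing the role of Alturas
itp = interpolate((knots,), samples, Gridded(Linear()))
etp = extrapolate(itp, NaN)        # NaN outside the interval [0.0, 4.0]
out = [etp[x] for x in targets]    # scalar indexing sidesteps issue #24
# out[1] and out[4] are NaN; the other entries are linearly interpolated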

Average 3D paths

I have two paths in 3D and I want to "average" them, if there's such a thing.
I have the xyz pairs timestamped at the time they were sampled:
ms    x    y    z
 3   0.1  0.2  0.6
12   0.1  0.2  1.3
23   2.1  4.2  0.3
55   0.1  6.2  0.3
Facts about the paths:
They all start and end on/near the same xyz point.
I have the total duration it took to complete the path, as well as the individual vertices.
They have different lengths (i.e. different number of xyz pairs).
Any help would be appreciated.
A simple method is the following...
First build a function interp(t, T, waypoints) that, given the current time t, the total path duration T, and the path waypoints, returns the current position. This can be done using linear interpolation or more sophisticated approaches that avoid speed or acceleration discontinuities.
Once you have interp, the average path can be defined as (example in Python):
def avg(t, T1, waypoints1, T2, waypoints2):
    T = (T1 + T2) / 2
    return middlePoint(interp(t*T1/T, T1, waypoints1),
                       interp(t*T2/T, T2, waypoints2))
the duration of the average path will be the average T = (T1 + T2) / 2 of the two durations.
It's also easy to change this approach to make a weighted average path.
In R, the distances between consecutive points in that series (assuming it is in a dataframe named "dat") would be:
with(dat, sqrt(diff(x)^2 +diff(y)^2 +diff(z)^2) )
#[1] 0.700000 4.582576 2.828427
There are a couple of averages I could think of: the average distance per interval, or the average distance traveled per unit time. It depends on what you want. This gives the average velocity in the three intervals:
with(dat, sqrt(diff(x)^2 +diff(y)^2 +diff(z)^2) /diff(ms) )
#[1] 0.07777778 0.41659779 0.08838835
There is definitely such a thing. For each point on path A, find the point that corresponds to your current point on path B, and then find the mid-point between those corresponding vertices. You will then get a path in between the two that is the "average" of the two paths. If you have a mismatch, where you did not sample the two paths the same way, then for an interior point on path A (i.e., not an end-point), find the two closest sampled points with a similar time-sampling on path B, and locate the mid-point of the triangle those three points make.
Now, since you've discretized your path by sampling it, this "average" is only going to be an approximation, not a "true" average like you could get by solving for the average function between two differentiable parametric functions defined by r(t) = <x(t), y(t), z(t)>.
Expanding on #6502's answer.
If you wish to retrieve a list of points that would make up the average path, you could sample the avg function at the time instants of the individual input points (stretched toward the average duration).
def avg2(T1, waypoints1, T2, waypoints2):
    # Collect the times we want to sample at
    T = (T1 + T2) / 2
    times = []
    times.extend(t*T/T1 for (t, x, y) in waypoints1)  # Shift the times towards
    times.extend(t*T/T2 for (t, x, y) in waypoints2)  # the average duration
    times.sort()
    last_t = None
    for t in times:
        # Skip points that follow in too-close succession
        if last_t is not None and last_t + 1.0e-6 >= t:
            continue
        last_t = t
        # Sample the average path at this instant
        x, y = avg(t, T1, waypoints1, T2, waypoints2)
        yield t, x, y
