Is there a function like MATLAB's inpolygon in Julia?
I'm desperately trying to migrate from MATLAB to Julia, but I still find myself dependent on it...
The GeometricalPredicates package has inpolygon: https://github.com/JuliaGeometry/GeometricalPredicates.jl
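A minimal sketch of how it might be used (hedged: this follows my reading of the package README, and GeometricalPredicates expects all coordinates to lie roughly in the interval [1, 2)):
using GeometricalPredicates
# a small polygon whose vertices sit inside the [1, 2) coordinate window
poly = Polygon(Point(1.0, 1.0), Point(1.5, 1.0), Point(1.5, 1.5), Point(1.0, 1.5))
inpolygon(poly, Point(1.2, 1.2)) # expected: true
inpolygon(poly, Point(1.9, 1.9)) # expected: false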
You could also investigate Luxor.jl:
using Luxor
p1 = Point(0, 0)
p2 = Point(10, 0)
p3 = Point(10, 10)
p4 = Point(0, 10)
isinside(Point(5, 5), [p1, p2, p3, p4]) # true
isinside(Point(15, 5), [p1, p2, p3, p4]) # false
But make sure to check for vertex and edge exceptions...
The PolygonOps package does point-in-polygon testing too.
It's more user-friendly, but possibly slower, than GeometricalPredicates.
(Hat tip: Julia forum.)
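A minimal PolygonOps sketch (hedged: based on my reading of its README; the polygon must be closed, i.e. the first vertex repeated at the end, and by default inpolygon returns 1 for inside, -1 for on the boundary, and 0 for outside):
using PolygonOps
# square as a closed ring of (x, y) tuples
square = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0), (0.0, 10.0), (0.0, 0.0)]
inpolygon((5.0, 5.0), square)  # 1  -> inside
inpolygon((15.0, 5.0), square) # 0  -> outside
inpolygon((10.0, 5.0), square) # -1 -> on an edge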
I am not familiar with Java Collections sorting or Reactor's Flux.
Now I have a requirement that the user needs to prioritize the publishing of some items from a Flux.
e.g.
p1 = Person{name="p1", country="US", gender="male", age=20}
p2 = Person{name="p2", country="US", gender="female", age=20}
p3 = Person{name="p3", country="Russia", gender="male", age=40}
p4 = Person{name="p4", country="China", gender="female", age=30}
p5 = Person{name="p5", country="Japan", gender="female", age=25}
p6 = Person{name="p6", country="Japan", gender="male", age=40}
A Flux is built as below:
Flux.fromIterable(List.of(p1, p2, p3, p4, p5, p6))
Now we need to prioritize the people from Japan and China; the rest keep their original order.
e.g.
p5 = Person{name="p5", country="Japan", gender="female", age=25}
p6 = Person{name="p6", country="Japan", gender="male", age=40}
p4 = Person{name="p4", country="China", gender="female", age=30}
p1 = Person{name="p1", country="US", gender="male", age=20}
p2 = Person{name="p2", country="US", gender="female", age=20}
p3 = Person{name="p3", country="Russia", gender="male", age=40}
How to sort the Flux?
You can use the sort method.
Flux<Person> flux = Flux.fromIterable(List.of(p1, p2, p3, p4, p5, p6));
Flux<Person> result = flux.sort(Comparator
        .comparing((Person p) -> !p.getCountry().equals("Japan"))
        .thenComparing((Person p) -> !p.getCountry().equals("China"))
);
result.subscribe(val -> System.out.println(val.toString()));
A Boolean key sorts false before true, so Japan comes first, then China, and the original order is kept within each group, as the output shows.
Output:
Person(name=p5, country=Japan, gender=female, age=25)
Person(name=p6, country=Japan, gender=male, age=40)
Person(name=p4, country=China, gender=female, age=30)
Person(name=p1, country=US, gender=male, age=20)
Person(name=p2, country=US, gender=female, age=20)
Person(name=p3, country=Russia, gender=male, age=40)
I assume you have a potentially infinite Flux, so you can't just collect everything, sort it, and emit it again. That rules out Flux#sort, which does exactly that.
Obviously you can't sort a sequence you do not (fully) have. What you can do is:
1. Use Flux#groupBy and generate a GroupedFlux for every priority. You get one outer Flux, which emits a Flux for every group (priority), and you can then work on each priority Flux as you like. For example, you could merge them again using Flux#mergeComparing, prioritizing one Flux over the other (if both have values, take the first; otherwise take what is there).
2. Use Flux#buffer and work on smaller buffers: sort each buffer as usual and emit it again. This can be a reasonable and simple solution.
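For the buffer idea, a rough sketch (reusing the Person class and getters from the question; the chunk size of 100 is arbitrary, and for a truly infinite source you may want bufferTimeout instead):
Flux<Person> prioritized = Flux.fromIterable(List.of(p1, p2, p3, p4, p5, p6))
        .buffer(100)              // cut the stream into chunks of up to 100 items
        .concatMap(chunk -> {
            // List.sort is stable, so the original order survives within each priority group
            chunk.sort(Comparator
                    .comparing((Person p) -> !p.getCountry().equals("Japan"))
                    .thenComparing(p -> !p.getCountry().equals("China")));
            return Flux.fromIterable(chunk);
        });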
Regards,
Martin
I am new to Julia and for some reason I can't get this very simple code to work. No matter what I try, I get the error LoadError: Mutating arrays is not supported. I understand that this error occurs when I mutate an array during optimization so that the code is no longer differentiable, but I clearly do not understand Julia well enough to see where I am doing this.
If it helps, the error seems to occur on the line for d in dataset.
using Statistics
using Flux: onehotbatch, onehot, onecold, crossentropy, throttle
using Flux
using Base.Iterators:repeated
using Plots:heatmap
using ImageView:imshow
images = Flux.Data.MNIST.images()[1:10]
labels = Flux.Data.MNIST.labels()[1:10]
heatmap(images[4], color=:grays, aspect_ratio=1)
X = float.(reshape.(images, :))
encode(x) = onehot(x, 0:9)
Y = encode.(labels)
m = Chain(Dense(28^2, 32, relu), Dense(32, 10), softmax)
loss(x, y) = crossentropy(m(x), y)
opt = ADAM()
accuracy(x, y) = mean(onecold(m(x)) .== onecold(y))
dataset = zip(X, Y)
print(size(X))
evalcb = () -> @show(loss(X, Y))
print("Training...")
# Flux.train!(loss, params(m), dataset, opt, cb=throttle(evalcb, 5));
for d in dataset
    print(d[2])
    gs = gradient(params(m)) do
        l = loss(d...)
    end
    update!(opt, params(m), gs)
end
It looks like I did have an old version of Flux (but not that old). I had to uninstall and reinstall Julia to install the new version of Flux.
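As an aside, reinstalling Julia shouldn't normally be needed to get a newer Flux; the package manager can usually do it (a sketch, assuming the registry has a newer release compatible with your environment):
using Pkg
Pkg.update("Flux")    # or, from the Pkg REPL: ] up Flux
Pkg.status("Flux")    # confirm which version is installed now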
I would like to plot two-variable functions (e_pos and e_neg in the code). Here, t and a are constants which I have set to 1.
My code to plot this function is the following:
t = 1
a = 1
kx = ky = range(-3.14/a, stop=3.14/a, step=0.1)
# Doing a meshgrid for values of k
KX, KY = kx'.*ones(size(kx)[1]), ky'.*ones(size(ky)[1])
e_pos = +t.*sqrt.((3 .+ (4).*cos.((3)*KX*a/2).*cos.(sqrt(3).*KY.*a/2) .+ (2).*cos.(sqrt(3).*KY.*a)));
e_neg = -t.*sqrt.((3 .+ (4).*cos.((3)*KX*a/2).*cos.(sqrt(3).*KY.*a/2) .+ (2).*cos.(sqrt(3).*KY.*a)));
using Plots
plot(KX,KY,e_pos, st=:surface,cmap="inferno")
If I use Plots this way, sometimes I get an empty 3D plane without the surface. What am I doing wrong? I think it may have to do with the meshgrids I did for kx and ky, but I am unsure.
Edit: I also get the following error:
I changed a few things in my code.
First, I left the variables as ranges. Second, I defined the energies as functions of kx and ky instead of evaluating them on a meshgrid. Here's the code:
t = 2.8
a = 1
kx = range(-pi/a,stop = pi/a, length=100)
ky = range(-pi/a,stop = pi/a, length=100)
#e_pos = +t*np.sqrt(3 + 4*np.cos(3*KX*a/2)*np.cos(np.sqrt(3)*KY*a/2) + 2*np.cos(np.sqrt(3)*KY*a))
e_pos(kx,ky) = t*sqrt(3+4cos(3*kx*a/2)*cos(sqrt(3)*ky*a/2) + 2*cos(sqrt(3)*ky*a))
e_neg(kx,ky) = -t*sqrt(3+4cos(3*kx*a/2)*cos(sqrt(3)*ky*a/2) + 2*cos(sqrt(3)*ky*a))
# Sort of broadcasting?
e_posfunc = e_pos.(kx,ky);
e_negfunc = e_neg.(kx,ky);
For the plotting I simply used the GR backend:
using Plots
gr()
plot(kx,ky,e_pos,st=:surface)
plot!(kx,ky,e_neg,st=:surface, xlabel="kx", ylabel="ky",zlabel="E(k)")
I got what I wanted!
In Julia, I want to solve a system of ODEs with external forcings g1(t), g2(t) like
dx1(t) / dt = f1(x1, t) + g1(t)
dx2(t) / dt = f2(x1, x2, t) + g2(t)
with the forcings read in from a file.
I am using this study to learn Julia and the package DifferentialEquations, but I am having difficulties finding the correct approach.
I could imagine that using a callback could work, but that seems pretty cumbersome.
Do you have an idea of how to implement such an external forcing?
You can use functions inside of the integration function. So you can use something like Interpolations.jl to build an interpolating polynomial from the data in your file, and then do something like:
g1 = interpolate(data1, options...)
g2 = interpolate(data2, options...)
p = (g1,g2) # Localize these as parameters to the model
function f(du,u,p,t)
    g1, g2 = p
    du[1] = ... + g1[t] # Interpolations.jl interpolates via []
    du[2] = ... + g2[t]
end
# Define u0 and tspan
ODEProblem(f,u0,tspan,p)
Thanks for a nice question and a nice answer by @Chris Rackauckas.
Below is a complete reproducible example of such a problem. Note that Interpolations.jl has since changed the indexing syntax to g1(t).
using Interpolations
using DifferentialEquations
using Plots
time_forcing = -1.:9.
data_forcing = [1, 0, 0, 1, 1, 0, 2, 0, 1, 0, 1]
g1_cst = interpolate((time_forcing, ), data_forcing, Gridded(Constant()))
g1_lin = scale(interpolate(data_forcing, BSpline(Linear())), time_forcing)
p_cst = (g1_cst) # Localize these as parameters to the model
p_lin = (g1_lin) # Localize these as parameters to the model
function f(du,u,p,t)
    g1 = p
    du[1] = -0.5 + g1(t) # Interpolations.jl now interpolates via ()
end
# Define u0 and tspan
u0 = [0.]
tspan = (-1.,9.) # Note, that we would need to extrapolate beyond
ode_cst = ODEProblem(f,u0,tspan,p_cst)
ode_lin = ODEProblem(f,u0,tspan,p_lin)
# Solve and plot
sol_cst = solve(ode_cst)
sol_lin = solve(ode_lin)
# Plot
time_dense = -1.:0.1:9.
scatter(time_forcing, data_forcing, label = "discrete forcing")
plot!(time_dense, g1_cst(time_dense), label = "forcing1", line = (:dot, :red))
plot!(sol_cst, label = "solution1", line = (:solid, :red))
plot!(time_dense, g1_lin(time_dense), label = "forcing2", line = (:dot, :blue))
plot!(sol_lin, label = "solution2", line = (:solid, :blue))
Dear all,
I'm looking for a numpy/scipy function to compute the bicoherence and auto-bicoherence for studying 3-wave interactions.
Thank you for any help,
nicola
The best package for this in Python land is http://pypi.python.org/pypi/nitime
It has several coherence estimators, but I didn't look very carefully at those. It is a package for neuroimaging, but the algorithms intentionally use only numpy and scipy, so they can be used by other applications.
Perhaps this MATLAB toolbox will help; it's generally quite easy to translate MATLAB into Python.
Here is a function that relies on scipy.signal.spectrogram (scipy version > 0.17) and computes the bicoherence between two signals.
The definition follows Hagihira 2001 and Hayashi 2007; see the Wikipedia article on bicoherence.
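In the notation of those references, the function below computes the squared bicoherence (my transcription, with X_1, X_2 the short-time spectra of s1 and s2 and the angle brackets denoting an average over spectrogram segments):

b^2(f_1, f_2) = \frac{\left|\left\langle X_1(f_1)\, X_1(f_2)\, X_2^{*}(f_1 + f_2)\right\rangle\right|^{2}}{\left\langle\left|X_1(f_1)\, X_1(f_2)\right|^{2}\right\rangle \left\langle\left|X_2(f_1 + f_2)\right|^{2}\right\rangle}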
Hope this helps.
Regards,
def compute_bicoherence(s1, s2, rate, nperseg=1024, noverlap=512):
    """Compute the bicoherence between two signals of the same length, s1 and s2,
    using scipy.signal.spectrogram.
    """
    from scipy import signal
    import numpy
    # compute the stft
    f1, t1, spec_s1 = signal.spectrogram(s1, fs=rate, nperseg=nperseg, noverlap=noverlap, mode='complex')
    f2, t2, spec_s2 = signal.spectrogram(s2, fs=rate, nperseg=nperseg, noverlap=noverlap, mode='complex')
    # transpose (f, t) -> (t, f)
    spec_s1 = numpy.transpose(spec_s1, [1, 0])
    spec_s2 = numpy.transpose(spec_s2, [1, 0])
    # compute the bicoherence
    arg = numpy.arange(f1.size // 2)  # integer indices for the lower half of the spectrum
    sumarg = arg[:, None] + arg[None, :]
    num = numpy.abs(
        numpy.mean(spec_s1[:, arg, None] * spec_s1[:, None, arg] * numpy.conjugate(spec_s2[:, sumarg]),
                   axis=0)
    ) ** 2
    denum = numpy.mean(
        numpy.abs(spec_s1[:, arg, None] * spec_s1[:, None, arg]) ** 2, axis=0) * numpy.mean(
        numpy.abs(numpy.conjugate(spec_s2[:, sumarg])) ** 2,
        axis=0)
    bicoh = num / denum
    return f1[arg], bicoh
# example of use and display (assumes s1 and s2 are 1-D numpy arrays and rate is their sampling rate)
import matplotlib.pyplot as plt
freqs, bicoh = compute_bicoherence(s1, s2, rate)
f = plt.figure(figsize=(9, 9))
plt.pcolormesh(freqs, freqs, bicoh,
               # cmap='inferno'
               )
plt.colorbar()
plt.clim(0, 0.5)
plt.show()
If you are referring to the normalized cross-spectral density (as defined on Wikipedia), then matplotlib.mlab.cohere would do the trick.
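A rough sketch of that route (the signals x and y and the sampling rate fs below are made-up stand-ins, not from the question):
import numpy as np
from matplotlib import mlab

fs = 1000.0
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.random.randn(t.size)        # signal with a 50 Hz component
y = np.sin(2 * np.pi * 50 * t + 0.3) + 0.5 * np.random.randn(t.size)  # phase-shifted, noisy copy

# magnitude-squared coherence between x and y, estimated with Welch-style averaging
Cxy, freqs = mlab.cohere(x, y, NFFT=256, Fs=fs, noverlap=128)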