I am trying to test a linear approximation function and I am getting the error "no method matching current_axis(::Nothing)".
Here is my linear approximation function:
function linear_approx(A, b, c, p0)
    p0 = [i for i in p0]
    y(p) = p' * A * p .+ b' * p .+ c .- 1
    e = y(p0)
    d = 2 * A * p0 + b
    (; d, e)
end
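For context (my annotation, not part of the original post): with y(p) = p'*A*p + b'*p + c - 1 and a symmetric A, the gradient is ∇y(p) = 2*A*p + b. So e = y(p0) and d = ∇y(p0) are exactly the value and slope of the first-order approximation y(p) ≈ e + d'*(p - p0) near p0, which is presumably what linear_segment draws.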
Here is the function that attempts to plot and throws the exception; I have also included the value of the argument I called it with:
pts = [(1, 1), (3, 2), (4, 4)]

function visualize_approx(pts)
    # Use this function to inspect your solution, and
    # ensure that the three points lie on one of
    # the level-sets of your quadratic approximation.
    (; A, b, c) = constant_curvature_approx(pts)
    min_val = Inf
    max_val = -Inf
    for pt in pts
        (; d, e) = linear_approx(A, b, c, pt)
        P = LinRange(pt[1] - 0.2, pt[1] + 0.2, 100)
        Q = linear_segment(pt, d, e, P)
        # the error arises here
        plot!(P, Q)
        plot!([pt[1]], [pt[2]])
    end
    delta = max_val - min_val
    min_val -= 0.25 * delta
    max_val += 0.25 * delta
    X = Y = LinRange(min_val, max_val, 100)
    Z = zeros(100, 100)
    for i = 1:100
        for j = 1:100
            pt = [X[i]; Y[j]]
            Z[i, j] = pt' * A * pt + pt' * b + c
        end
    end
    contour(X, Y, Z, levels = [-1, 0, 1, 2, 3])
    for pt in pts
        plot!([pt[1]], [pt[2]])
    end
    current_figure()
end
Does anyone know why this error arises?
plot! modifies a previously created plot object. It seems you did not create a plot before calling it, so current_axis finds nothing and throws; that is why you get the error. Use plot when creating the plot and plot! when modifying it.
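For example, a minimal sketch of the pattern (assuming a Makie backend such as GLMakie, since current_axis and current_figure are Makie functions; the data here is just a stand-in):

using GLMakie
P = LinRange(0.8, 1.2, 100)
plot(P, sin.(P))    # plot creates the figure and the current axis
plot!([1.0], [1.0]) # plot! can now find an axis to mutate
current_figure()

In visualize_approx, that means the first drawing call in the loop must be plot (or create a Figure and Axis up front and pass the axis to the plot! calls).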
I am new to Julia and have been trying to compute some polynomials using the Nemo library's multivariate polynomials. I have the following code:
R = GF(2); # create finite field
S, (z, x) = PolynomialRing(R, ["z", "x"]); # multivariate polynomial ring
# polynomial initialisations
L = x^0;
E_L = x^0;
E = x*0;
a = z;
w = [1 1 a^3 a^2 a^1 0 0]; # input vector, in terms of a
n = length(w);
j = 1;
for i = 1:n
    if (w[i] != 0)
        L = L*(1 - (a^i)*x); # equation for the locator polynomial
        if (i != j)
            E_L = E_L*(1 - (a^j)*x);
        end
        j = j + 1;
        E = E + w[i]*(a^i)*E_L; # LINE WITH ERROR
    end
end
I am not entirely sure why the line with the expression for E throws an error.
I have tried a similar thing with the declaration for L: L = L + L*(1-(a^i)*x), to see whether the problem was the addition of two polynomials; however, this works fine. Therefore, I am confused as to why the expression for E throws an error.
Any help would be greatly appreciated! Thanks in advance!
I need to integrate the following function, which has a differentiation term inside. Unfortunately, that term is not easily differentiable.
Is it possible to do something like numerical integration to evaluate this in R?
You can assume 30, 50, 0.5, 1, 50, and 30 for l, tau, a, b, F, and P respectively.
UPDATE: What I tried
# LF and LP are assumed to be already defined (presumably from the F and P constants above)
InnerFunc4 <- function(t, x) { digamma(gamma(a*t*(LF - LP)*b) / gamma(a*t)) * (x - t) }
InnerIntegral4 <- Vectorize(function(x) { integrate(InnerFunc4, 1, x, x = x)$value })
integrate(InnerIntegral4, 30, 80)$value
It shows the following error:
Error in integrate(InnerFunc4, 1, x, x = x) : non-finite function value
UPDATE2:
InnerFunc4 <- function(t, L) { digamma(gamma(a*t*(LF - LP)*b) / gamma(a*t)) * (L - t) }
t_lower_bound = 0
t_upper_bound = 30
L_lower_bound = 30
L_upper_bound = 80
step_size = 0.5
integral = 0
t <- t_lower_bound + 0.5*step_size
while (t < t_upper_bound) {
  L = L_lower_bound + 0.5*step_size
  while (L < L_upper_bound) {
    volume = InnerFunc4(t, L)*step_size**2
    integral = integral + volume
    L = L + step_size
  }
  t = t + step_size
}
Since it seems that your problem is only the derivative, you can get rid of it by means of partial integration (integration by parts):
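The worked formula from the original answer is not reproduced here, but the general identity being applied is

\int_a^b u(t)\, v'(t)\, dt = \Big[ u(t)\, v(t) \Big]_a^b - \int_a^b u'(t)\, v(t)\, dt

This moves the derivative off the factor that is hard to differentiate, leaving only terms that integrate() can handle numerically.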
Edit
This solution is not applicable for a lower integration bound of 0.
I want to plot the time evolution of a 3D Gaussian with Makie.jl.
Here is a surface-version code of sin(r)/r.
So I wrote the following code in reference to it.
using Makie
using FileIO
using LinearAlgebra
using AbstractPlotting

scene = Scene(backgroundcolor = :black);
f(x, y, z) = exp(-((x)^2 + (y)^2 + (z)^2))
r = LinRange(-5, 5, 50)
vol_func(t) = [Float64(f(x - cos(t), y - sin(t), z - t)) for x = r, y = r, z = r]
vol = volume!(scene, r, r, r, vol_func(20), algorithm = :mip)[end]
scene[Axis].names.textcolor = :gray
N = 20
scene
record(scene, "voloutput.mp4", range(0, stop = 5, length = N)) do t
    vol[3] = vol_func(t)
end
But this code does not work.
MethodError: Cannot `convert` an object of type Array{Float64,3} to an object of type LinRange{Float64}
How should I fix the code?
P.S.
The snapshot at the initial time looks like this (reference):
using Makie
using FileIO
using LinearAlgebra
using AbstractPlotting

r = LinRange(-20, 20, 500); # our value range
ρ(x, y, z) = exp(-((x - 1)^2 + (y)^2 + (z)^2)) # function (charge density)
# create a Scene with the attribute `backgroundcolor = :black`,
# can be any compatible color. Useful for better contrast and not killing your eyes with a white background.
scene = Scene(backgroundcolor = :black)
volume!(
    scene,
    r, r, r,          # coordinates to plot on
    ρ,                # charge density (functions as colorant)
    algorithm = :mip  # maximum-intensity-projection
)
scene[Axis].names.textcolor = :gray # let axis labels be seen on dark background
save("sp.png", scene)
I want to see the yellow region moving as a spiral. (2020/08/28)
I just realized it should be vol[4], not vol[3]. Then it worked.
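If I read the (older) AbstractPlotting API correctly, the integer index refers to the positional arguments of the volume! call:

# vol[1], vol[2], vol[3] are the r, r, r coordinate ranges passed to volume!,
# while vol[4] is the 3-D intensity array, so that is the observable to update:
vol[4] = vol_func(t)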
But I have a follow-up question. (2020/08/31)
I tried to do the same thing for the matrix-form time-dependent Schrödinger equation, with a Gaussian initial condition.
using LinearAlgebra
using OrdinaryDiffEq
using DifferentialEquations

# Define the underlying equation
function time_evolution(ψdot, ψ, p, t)
    ψdot .= -im .* H(Lx, Ly, Lz) * ψ
end

Lx = Ly = Lz = 10

ψ0 = [] # initial conditions
for iz = 1:Lz
    for ix = 1:Lx
        for iy = 1:Ly
            gauss = exp(-((ix)^2 + (iy)^2 + (iz)^2))
            push!(ψ0, gauss)
        end
    end
end

tspan = (0., 1.0) # simulation time span

# Pass to solvers
prob = ODEProblem(time_evolution, ψ0, tspan)
sol = solve(prob)
Here, H(Lx,Ly,Lz) is an N×N matrix parameterized by the system sizes Lx, Ly, Lz, with N = Lx×Ly×Lz. The sample code for H(Lx,Ly,Lz) is here.
Then,
using Makie
using FileIO
using LinearAlgebra
using AbstractPlotting
using ColorSchemes

x = 1:Lx # our value range
y = 1:Ly
z = 1:Lz
ρ(ix, iy, iz, nt) = abs2.((sol[nt][(iz-1)*Lx*Ly + (ix-1)*Ly + (iy-1)]) ./ norm(sol[nt][(iz-1)*Lx*Ly + (ix-1)*Ly + (iy-1)]))
ψ(nt) = Float64[ρ(ix, iy, iz, nt) for ix in x, iy in y, iz in z]
scene = Scene(backgroundcolor = :white)
c = ψ(length(sol.t))
vol = volume!(
    scene,
    x, y, z,          # coordinates to plot on
    c,                # charge density (functions as colorant)
    algorithm = :mip, # maximum-intensity-projection
    colorrange = (0, 0.01),
    transparency = true,
)[end]
update_cam!(scene, Vec3f0(1, 0.5, 0.1), Vec3f0(0))
scene[Axis].names.textcolor = :gray # let axis labels be seen on dark background
record(scene, "output.mp4", range(0, stop = length(sol.t)-1, length = 1)) do nt
    vol[4] = ψ(nt)
end
But this code has an error.
ArgumentError: range(0.0, stop=5.0, length=1): endpoints differ
Where is the mistake?
I found the mistake. (2020/09/02)
sol[nt] → sol(nt), since sol[nt] indexes the nt-th saved step while sol(nt) interpolates the solution at time nt.
range(0, stop = length(sol.t)-1, length = 1) → range(0, stop = 1.0, length = 20), since a range whose endpoints differ cannot have length 1.
Then the code ran and an mp4 animation was obtained.
But the plot can't be seen in the mp4 file. Why...
I am trying to implement the gradient descent algorithm from scratch to find the slope and intercept values for my linear fit line.
Using a package to calculate the slope and intercept, I get slope = 0.04 and intercept = 7.2, but when I use my gradient descent algorithm for the same problem, I get (-Inf, -Inf) for both values.
Here is my code:
x = [1,2,3,4,5,6,7,8,9,10,11,12,13,141,5,16,17,18,19,20]
y = [2,3,4,5,6,7,8,9,10,11,12,13,141,5,16,17,18,19,20,21]

function GradientDescent()
    m = 0
    c = 0
    for i = 1:10000
        for k = 1:length(x)
            Yp = m*x[k] + c
            E = y[k] - Yp     # error in predicted value
            dm = 2*E*(-x[k])  # partial derivative of cost function w.r.t. slope (m)
            dc = 2*E*(-1)     # partial derivative of cost function w.r.t. intercept (c)
            m = m + (dm * 0.001)
            c = c + (dc * 0.001)
        end
    end
    return m, c
end

Values = GradientDescent() # after running, Values = (-Inf, -Inf)
I have not done the math, but instead wrote the tests. It seems you have a sign error when updating m and c: gradient descent steps against the gradient, so with dm and dc defined as above they must be subtracted, not added.
Also, writing the tests really helps, and Julia makes it simple :)
function GradientDescent(x, y)
    m = 0.0
    c = 0.0
    for i = 1:10000
        for k = 1:length(x)
            Yp = m*x[k] + c
            E = y[k] - Yp
            dm = 2*E*(-x[k])
            dc = 2*E*(-1)
            m = m - (dm * 0.001)
            c = c - (dc * 0.001)
        end
    end
    return m, c
end
using Test

@testset "gradient descent" begin
    @testset "slope $slope" for slope in [0, 1, 2]
        @testset "intercept for $intercept" for intercept in [0, 1, 2]
            x = 1:20
            y = broadcast(x -> slope * x + intercept, x)
            computed_slope, computed_intercept = GradientDescent(x, y)
            @test slope ≈ computed_slope atol=1e-8
            @test intercept ≈ computed_intercept atol=1e-8
        end
    end
end
I can't get your exact numbers, but this is close. Perhaps it helps?
# 141 ?
datax = [1,2,3,4,5,6,7,8,9,10,11,12,13,141,5,16,17,18,19,20]
datay = [2,3,4,5,6,7,8,9,10,11,12,13,141,5,16,17,18,19,20,21]

function gradientdescent()
    m = 0
    b = 0
    learning_rate = 0.00001
    for n in 1:10000
        for i in 1:length(datay)
            x = datax[i]
            y = datay[i]
            guess = m * x + b
            error = y - guess
            dm = 2error * x
            dc = 2error
            m += dm * learning_rate
            b += dc * learning_rate
        end
    end
    return m, b
end

gradientdescent()
(-0.04, 17.35)
It seems that adjusting the learning rate is critical...
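To illustrate, here is a sketch with the learning rate pulled out as a parameter (my refactor of the gradientdescent above, on the same data):

function gradientdescent(learning_rate)
    m = 0.0
    b = 0.0
    for n in 1:10000
        for i in 1:length(datay)
            x = datax[i]
            y = datay[i]
            error = y - (m * x + b) # residual of the current guess
            m += 2error * x * learning_rate
            b += 2error * learning_rate
        end
    end
    return m, b
end

gradientdescent(0.00001) # converges to roughly (-0.04, 17.35), as above
gradientdescent(0.001)   # the updates overshoot on the large x values and diverge

With the stray 141 in the data, the m update is proportional to x, so a step size like 0.001 overshoots the minimum further on every pass and m and b blow up; that is the same kind of divergence that produces (-Inf, -Inf).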
I'm trying to vectorize an inequality constraint comparing two Convex types. On one side, I have Convex.MaxAtoms, and on the other side, I have Variables. I want to do something like the following:
using Convex

N = 10
t = Variable(1)
v = Variable(N)
x = Variable(1)
z = rand(100)
problem = minimize(x)
problem.constraints += [t >= 0]
ccc = Vector{Convex.MaxAtom}(N)
for i = 1:N
    c = -(1. + minimum(x.*z))
    cc = t + c
    ccc[i] = max(cc, 0.)
end
problem.constraints += [ccc <= v]
but I'm getting the following error on the final constraint:
ERROR: LoadError: MethodError: no method matching isless(::Complex{Int64}, ::Int64)
I'm not sure where those Complex{Int64} and Int64 types are coming from. Is there a better way of adding this constraint besides looping through and adding individual comparisons, like
for i = 1:N
    problem.constraints += [ccc[i] <= v[i]]
end
I'm trying to avoid this because eventually my N will be much larger than 10.
In this case (thanks to Dr. Udell), it works to vectorize by keeping ccc a single Convex expression and comparing it against v directly, rather than collecting atoms in a plain Julia Vector (which makes <= fall back to Base's comparisons instead of building a Convex constraint):
c = -(1. + xisim + minimum(x.*z))
cc = t + c
ccc = max(cc,0.)
problem.constraints += [ccc <= v]