How to use chainer.GradientMethod and customize the parameter update rule in Chainer

I found the class chainer.GradientMethod at https://docs.chainer.org/en/stable/reference/core/generated/chainer.GradientMethod.html?highlight=gradient. It has a method called create_update_rule(). I have defined a Function whose backward pass computes gradients, and I want to apply an update like the following:
W[i] -= gradient[i] * learning_rate;
where W is a parameter of my Function/Layer. But I don't know how Chainer's default optimizers update parameters: is it += or -=?

Each optimizer, for example the SGD optimizer, is a subclass of GradientMethod, and each optimizer has its own UpdateRule.
See SGD's update rule, which computes
W[i] -= gradient[i] * learning_rate
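So the default update is -=, i.e. gradient descent. If you want full control over the update, you can subclass both UpdateRule and GradientMethod yourself. The following is only a minimal sketch modeled on Chainer's built-in SGD, not Chainer's actual source; the names PlainSGD and PlainSGDRule are made up for illustration, and only the CPU path is shown (a real rule would also override update_core_gpu).
from chainer import optimizer


class PlainSGDRule(optimizer.UpdateRule):
    """Applies W -= learning_rate * gradient to a single parameter."""

    def __init__(self, lr):
        super(PlainSGDRule, self).__init__()
        self.hyperparam.lr = lr

    def update_core_cpu(self, param):
        grad = param.grad
        if grad is None:
            return
        # The -= below is exactly the update asked about.
        param.data -= self.hyperparam.lr * grad


class PlainSGD(optimizer.GradientMethod):
    """Optimizer whose create_update_rule() hands each parameter a PlainSGDRule."""

    def __init__(self, lr=0.01):
        super(PlainSGD, self).__init__()
        self.hyperparam.lr = lr

    def create_update_rule(self):
        return PlainSGDRule(self.hyperparam.lr)
After opt = PlainSGD(lr=0.01) and opt.setup(model), each opt.update() call runs the rule's update_core_cpu() on every parameter of the model.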

Related

How to define a persistent matrix variable inside a Scilab function?

I have been developing a Scilab function in which I need a persistent variable of the matrix type. Based on my similar question, I have chosen the same approach. Below is the code I have used to test this approach.
function [u] = FuncXYZ(x)
    global A;
    global init;
    if init == 0 then
        init = 1;
        A = eye(4, 4);
    endif
    u = A(1, 1);
endfunction
As soon as I integrated the function into my Xcos simulation, I was surprised to see "0" at the output of the scifunc_block_m block.
Nevertheless, I found that if I use the command below to return from the function
u = A(3, 3);
the function really does return the expected "1". Additionally, if I look at the Variable Browser in the top right corner of the Scilab window, I can't see the expected 4x4 item A. It seems that I am doing something wrong.
Can anybody give me advice on how to define a persistent variable of the matrix type inside a Scilab function?
Thanks in advance for any ideas.
Global variables are by default initialized with an empty matrix. Hence, you should detect the first call with isempty():
function [u] = FuncXYZ(x)
    global A;
    global init;
    if isempty(init)
        init = 1;
        A = eye(4, 4);
    end
    u = A(1, 1);
endfunction
By the way, your code is incorrect: there is no endif in Scilab (the if block is closed with end, as above).

Switching ODE functions in Julia

From the documentation of the DifferentialEquations package, switching between sets of ODE functions can be done using a parameter, as in
function f(du,u,p,t)
    if p==0
        du[1] = 2u[1]
    else
        du[1] = - u[1]
    end
    du[2] = -u[2]
end
Is it possible to use a dependent variable (state variable) instead of the parameter p as the switch, like this:
function f(du,u,p,t)
    if (u[2]<=0 && du[2]>0)
        du[1] = 2u[1]
    else
        du[1] = - u[1]
    end
    du[2] = -u[2]
end
Thank you in advance for your help.
Is it possible to use a dependent variable (state variable) instead of the parameter p as the switch, like this:
Yes. It introduces a discontinuity, so it's not the best thing to do, but adaptivity will handle it. Sometimes performance can be improved by making a ContinuousCallback that root-finds that value as the condition but then does nothing in the affect!. But yes, the code with the branch in it is fine.
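For illustration only, here is a minimal sketch of that callback idea, assuming DifferentialEquations.jl and simplifying the switch so that it depends on u[2] alone: the condition root-finds u[2] == 0 and the affect! deliberately changes nothing, so the integrator merely steps exactly onto the switching point.
using DifferentialEquations

function f(du,u,p,t)
    if u[2] <= 0            # branch on the state variable itself
        du[1] = 2u[1]
    else
        du[1] = -u[1]
    end
    du[2] = -u[2]
end

condition(u,t,integrator) = u[2]   # event fires when u[2] crosses zero
affect!(integrator) = nothing      # change nothing; just land on the point
cb = ContinuousCallback(condition, affect!)

prob = ODEProblem(f, [1.0, 1.0], (0.0, 5.0))
sol = solve(prob, Tsit5(), callback = cb)
With this toy initial condition the event may never actually fire; the snippet only shows how the callback is wired into solve.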

Saving multiple PDF files in Julia

I'm trying to write a script that draws Fourier sums of at least certain nicely behaved functions using Riemann sums. Ignoring the fact that my math might be seriously off at the moment, I can't seem to save multiple PDF files from the same script, which goes as follows:
"""
RepeatFunction(domain::Array{Real,1},fvals::Array{Complex{Real}},
repeatN::Int64,period::Float64=2*pi)
Builds a periodic function centered at origin in both directions from a set of
given function values, but more importantly, also stretches out the domain to
accommodate this new extended array.
"""
function RepeatFunction(domain::Array{Float64,1},fvals::Array{Float64,1},
                        N::Int64,period::Float64=2*pi)
    # Extending the domain
    for n in 1:N/2
        domain = [
            linspace(domain[1]-period,domain[2],length(fvals));
            domain;
            linspace(domain[end],domain[end-1]+period,length(fvals))];
    end
    # Repeating the function
    if N % 2 == 0
        fvals = repeat(fvals,outer=[N+1]);
    else
        fvals = repeat(fvals, outer=[N]);
    end
    return domain, fvals
end
"""
RiemannSum(domain::Array{Float64},fvals::Array{Float64})::Float64
Calculates the discrete Riemann sum of a real valued function on a given
equipartitioned real domain.
"""
function RiemannSum(domain::Array{Complex{Real},1},fvals::Array{Complex{Real},1})
    try
        L = domain[end] - domain[1];
        n = length(fvals);
        return sum(fvals * L / n);
    catch
        println("You most likely attempted to divide by zero.
Check the size of your domain.")
        return NaN
    end
end
"""
RiemannSum(domain::StepRange{Real,Real},fvals::StepRange{Real,Real})::Float64
Calculates the discrete Riemann sum of a function on a given
equipartitioned domain.
"""
function RiemannSum(domain,fvals)
    try
        L = domain[end] - domain[1];
        n = length(fvals);
        return sum(fvals * L / n);
    catch
        println("You most likely attempted to divide by zero.
Check the size of your domain.")
        return NaN
    end
end
"""
RiemannSum(domain::StepRange{Real,Real},fvals::StepRange{Real,Real})::Float64
Calculates the discrete Riemann sum of a function on a given
equipartitioned domain.
"""
function RiemannSum(domain::StepRangeLen{Real,Base.TwicePrecision{Real},Base.TwicePrecision{Real}},
                    fvals::StepRangeLen{Real,Base.TwicePrecision{Real},Base.TwicePrecision{Real}})
    try
        L = domain[end] - domain[1];
        n = length(fvals);
        return sum(fvals * L / n);
    catch
        println("You most likely attempted to divide by zero.
Check the size of your domain.")
        return NaN
    end
end
"""
RiemannSum(domain::StepRange{Real,Real},fvals::StepRange{Real,Real})::Float64
Calculates the discrete Riemann sum of a function on a given
equipartitioned domain.
"""
function RiemannSum(domain,fvals)
    try
        L = domain[end] - domain[1];
        n = length(fvals);
        return sum(fvals * L / n);
    catch
        println("You most likely attempted to divide by zero.
Check the size of your domain.")
        return NaN
    end
end
"""
FourierCoefficient(domain,fvals)
Calculates an approximation to the Fourier coefficient for a function with
a period equal to the domain length.
"""
function FourierCoefficient(domain,fvals,n::Int64,T::Float64)
    return 1/T * RiemannSum(domain,fvals*exp(-1im * n * 1/T));
end
"""
FourierSum(domain,fvals)
Calculates the Fourier sum of a function on a given domain.
"""
function FourierSum(domain,fvals, N::Int64,T::Float64)
    return [sum(FourierCoefficient(domain,fvals,n,T)*exp(1im * n * 1/T)) for n in -N:N];
end
using Plots;
pyplot()
n = 10;
T = 2*pi;
x = collect(linspace(-pi,pi,2n+1));
f = x.^2 + x;
funplot = plot(x,f);
#display(funfig)
savefig("./fun.pdf")
#println("Φ =",real(Φ))
x,repf = RepeatFunction(x,f,6,T)
repfunplot = plot(x,repf,reuse=false);
#display(repfunfig)
savefig("./repfun.pdf")
The trick mentioned here has no effect on the outcome, which is that only the first PDF gets saved. Are there any Julia gurus here who know what is causing the issue? I get no error messages whatsoever.
I ran the code on my system (Julia version 0.6.1) without errors and got two PDFs in my folder: fun.pdf and repfun.pdf.
Alright, I've figured out what the problem is, or was. At least when coding in Juno/Atom, Julia assumes on startup that the working directory is /home/<username>, meaning that if you issue a savefig() command, the images are written there.
If you want the images to appear at a specific location, either give savefig an absolute path or put the line
cd("/path/to/desired/location")
at the beginning of your file.
How the first image somehow managed to appear in the desired folder still remains a mystery to me. I might have tried running Julia through bash, when working in said directory, but this must have been before I managed to get even the most rudimentary of figures to appear without issues.
I tried replicating your code, but I get this error:
ERROR: LoadError: UndefVarError: StepRangeLen not defined
in include at boot.jl:261
in include_from_node1 at loading.jl:320
in process_options at client.jl:280
in _start at client.jl:378
while loading C:\Users\Евгений\Desktop\1.jl, in expression starting on line 433
There isn't even a line 433 in this code, so I'm very confused.

Add my custom loss function to torch

I want to add a loss function to torch that calculates the edit distance between predicted and target values.
Is there an easy way to implement this idea?
Or do I have to write my own class with backward and forward functions?
If your criterion can be represented as a composition of existing modules and criteria, it's a good idea to simply construct such a composition using containers. The only problem is that standard containers are designed to work with modules only, not criteria. The difference is in the :forward method signature:
module:forward(input)
criterion:forward(input, target)
Luckily, we are free to define our own container which is able to work with criteria too. For example, a generalized Sequential:
local GeneralizedSequential, _ = torch.class('nn.GeneralizedSequential', 'nn.Sequential')

function GeneralizedSequential:forward(input, target)
    return self:updateOutput(input, target)
end

function GeneralizedSequential:updateOutput(input, target)
    local currentOutput = input
    for i=1,#self.modules do
        currentOutput = self.modules[i]:updateOutput(currentOutput, target)
    end
    self.output = currentOutput
    return currentOutput
end
Below is an illustration of how to implement nn.CrossEntropyCriterion using this generalized sequential container:
function MyCrossEntropyCriterion(weights)
    criterion = nn.GeneralizedSequential()
    criterion:add(nn.LogSoftMax())
    criterion:add(nn.ClassNLLCriterion(weights))
    return criterion
end
Check whether everything is correct:
output = torch.rand(3,3)
target = torch.Tensor({1, 2, 3})
mycrit = MyCrossEntropyCriterion()
-- print(mycrit)
print(mycrit:forward(output, target))
print(mycrit:backward(output, target))
crit = nn.CrossEntropyCriterion()
-- print(crit)
print(crit:forward(output, target))
print(crit:backward(output, target))
Just to add to the accepted answer, you have to be careful that the loss function you define (edit distance in your case) is differentiable with respect to the network parameters.

SystemVerilog random bit vector

I'm using SystemVerilog and I want to randomize a bit vector of size 100,
but I want only 10 of its bits to get the value 1.
I tried to use countones() in a constraint, but it doesn't seem to be possible,
so I'm out of ideas.
Thanks for any help!
I tried this code out and it works in Incisive:
package some_package;
    class some_class;
        rand bit[99:0] vec;
        constraint just_10_ones {
            $countones(vec) == 10;
        }
    endclass
endpackage

module top;
    import some_package::*;
    initial begin
        static some_class obj = new();
        obj.randomize();
        $display("vec = %b", obj.vec);
    end
endmodule
From what I remember, some vendors didn't support such constraints in the past, where a random variable is used as an input to a function. If they did support it, the value of the variable at the time randomize() started was used as the input, but the constraint did not affect its final value.
