My Julia version is 1.7.1.
This code reproduces my problem:
struct Foo1
    sixsix
    aa
    bb
    disposion
end

struct Foo2
    aa
    bb
end
function Base.:+(f1::Foo1, f2::Foo2)
    newf = f1
    for n in fieldnames(typeof(f2))
        getproperty(newf, n) += getproperty(f2, n)
    end
    return newf
end
This returns LoadError: syntax: invalid assignment location "getproperty(newf, n)".
The same LoadError happens when I try to use getfield:
function Base.:+(f1::Foo1, f2::Foo2)
    newf = f1
    for n in fieldnames(typeof(f2))
        getfield(newf, n) += getfield(f2, n)
    end
    return newf
end
Firstly, you must make your structs mutable for this to work.
Secondly, this:
getproperty(newf, n) += getproperty(f2, n)
is expanded into
getproperty(newf, n) = getproperty(newf, n) + getproperty(f2, n)
In other words, you are trying to assign a value to the result of a function call. In Julia you can only assign to variables, not to values; the only other thing this syntax is used for is function definition. From the manual:
There is a second, more terse syntax for defining a function in Julia.
The traditional function declaration syntax demonstrated above is
equivalent to the following compact "assignment form":
julia> f(x,y) = x + y
f (generic function with 1 method)
So the syntax you are using doesn't do what you want, and what you want to do is not possible anyway, since you can only assign to variables, not to values.
What you are trying to do can be achieved like this (assuming n is a Symbol):
setfield!(newf, n, getfield(newf, n) + getfield(f2, n))
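For completeness, here is a minimal sketch of the loop-based version. It assumes both structs are declared mutable (renamed MFoo1/MFoo2 here so they don't clash with the original definitions) and copies f1 first so the left operand is not mutated:

mutable struct MFoo1
    sixsix
    aa
    bb
    disposion
end

mutable struct MFoo2
    aa
    bb
end

function Base.:+(f1::MFoo1, f2::MFoo2)
    newf = deepcopy(f1)               # work on a copy so f1 is left untouched
    for n in fieldnames(typeof(f2))
        setfield!(newf, n, getfield(newf, n) + getfield(f2, n))
    end
    return newf
end

MFoo1(1, 2, 3, 4) + MFoo2(10, 20)     # MFoo1(1, 12, 23, 4)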
I would rather recommend the following:
Base.:+(f1::Foo1, f2::Foo2) =
    Foo1(f1.sixsix, f1.aa + f2.aa, f1.bb + f2.bb, f1.disposion)
Your code is wrong for the following reasons:
Foo1 is not mutable, so you cannot change the values of its fields.
To set a field of a mutable struct (which yours is not) you use setfield!.
You mix properties and fields of a struct, which are not the same thing.
fieldnames works on types, not on instances of a type (see the snippet below).
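For instance, on Julia 1.7 (error output abridged):

julia> fieldnames(Foo1)
(:sixsix, :aa, :bb, :disposion)

julia> fieldnames(Foo1(1, 2, 3, 4))
ERROR: MethodError: no method matching fieldnames(::Foo1)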
If you wanted to be fancier and automatically detect the matching fields, you could do:
Base.:+(f1::Foo1, f2::Foo2) =
    Foo1((n in fieldnames(Foo2) ?
              getfield(f1, n) + getfield(f2, n) :
              getfield(f1, n) for n in fieldnames(Foo1))...)
However, I would not recommend it, as I feel that it is overly complicated.
I found a workaround to make a composite function, but I believe there should be a better way to do this:
? f = x^2
%1 = x^2
? g = x^3
%2 = x^3
? x = g
%3 = x^3
? fog = eval(f)
%4 = x^6
? x = 2
%5 = 2
? result = eval(fog)
%6 = 64
With this method I need to assign x many times, and I don't want to use the eval function. The code is neither readable nor maintainable.
You can simplify Piotr's nice answer to
comp(f, g) = x->f(g(x));
Indeed, you do not need to assign to the (global) variable h in the comp function itself. Also, the braces are not necessary for a single-line statement, and neither are type annotations (which are meant to optimize the byte compiler output or help gp2c; in this specific case they do not help).
Finally, the parentheses around the argument list are optional in the closure definition when there is a single argument, as with (x) here.
I would modify the examples as follows
f(x) = x^2;
g(x) = x^3;
h = comp(f, g);
? h('x) \\ note the quote
%1 = x^6
? h(2)
%2 = 64
The quote in 'x makes sure we use the formal variable x and not whatever value was assigned to the GP variable with that name. For the second example, there is no need to assign the value 2 to x; we can call h(2) directly.
P.S. The confusion between formal variables and GP variables is unfortunate but very common for short names such as x or y. The quote operator was introduced to avoid having to kill variables. In more complicated functions, it can be cumbersome to systematically type 'x instead of x. The idiomatic construct to avoid this is my(x = 'x). This makes sure that the x GP variable really refers to the formal variable in the current scope.
PARI/GP supports anonymous closures, so you can define function composition on your own like this:
comp(f: t_FUNC, g: t_FUNC) = {
  h = (x) -> f(g(x))
};
Then your code can be transformed to a more readable form:
f(x) = x^2;
g(x) = x^3;
h = comp(f, g);
? h(x)
%1 = x^6
? x = 2; h(x)
%2 = 64
Hope this helps.
How do I evaluate a function in only one of its variables? That is, I hope to obtain another function after partially evaluating the original one. I have the following piece of code.
deff ('[F] = fun (x, y)', 'F = x ^ 2-3 * y ^ 2 + x * y ^ 3');
fun (4, y)
I hope to get 16 - 3y^2 + 4y^3
If what you want to do is to write x = f(4,y), and later just do x(2) to get 36, that is called partial application:
Intuitively, partial function application says "if you fix the first arguments of the function, you get a function of the remaining arguments".
This is a very useful feature and is very common in functional programming languages such as Haskell, but even JS and Python are now able to do it. It is also possible to do this in MATLAB and GNU/Octave using anonymous functions (see this answer). In Scilab, however, this feature is not available.
Workaround
Nonetheless, Scilab itself uses a workaround to carry a function along with its arguments without fully evaluating it. You can see this being used in ode(), fsolve(), optim(), and others:
Create a list containing the function and the arguments for partial evaluation: list(f,arg1,arg2,...,argn)
Use another function to evaluate such a list together with the last argument: evalPartList(list(...), last_arg)
The implementation of evalPartList() can be something like this:
function y = evalPartList(fList, last_arg)
    //fList: list in which the first element is a function
    //last_arg: last argument to be applied to the function
    func = fList(1);                //extract function from the list
    y = func(fList(2:$), last_arg); //each element of the list, from second
                                    //to last, becomes an argument
endfunction
You can test it on Scilab's console:
--> deff ('[F] = fun (x, y)', 'F = x ^ 2-3 * y ^ 2 + x * y ^ 3');
--> x = list(fun,4)
x =
x(1)
[F]= x(1)(x,y)
x(2)
4.
--> evalPartList(x,2)
ans =
36.
This is a very simple implementation of evalPartList(), and you have to be careful not to pass too many or too few arguments.
In the way you're asking, you can't.
What you're looking for is called symbolic (or formal) computational mathematics, because you don't pass actual numerical values to functions.
Scilab is numerical software, so it can't do such a thing. But there is a toolbox, scimax (installation guide), that relies on the free formal software wxmaxima.
BUT
An ugly, stupid, but still sort-of-working solution is to take advantage of strings:
function F = fun (x, y) // Here we define a function that may return a constant or a string depending on the input
    fmt = '%10.3E'
    if (type(x)==type('')) & (type(y)==type(0)) // x is a string, so F is a string
        ys = msprintf(fmt,y)
        F = x+'^2 - 3*'+ys+'^2 + '+x+'*'+ys+'^3'
    end
    if (type(y)==type('')) & (type(x)==type(0)) // y is a string, so F is a string
        xs = msprintf(fmt,x)
        F = xs+'^2 - 3*'+y+'^2 + '+xs+'*'+y+'^3'
    end
    if (type(y)==type('')) & (type(x)==type('')) // x & y are strings, so F is a string
        F = x+'^2 - 3*'+y+'^2 + '+x+'*'+y+'^3'
    end
    if (type(y)==type(0)) & (type(x)==type(0)) // x & y are constants, so F is a constant
        F = x^2 - 3*y^2 + x*y^3
    end
endfunction
// Then we can use this 'symbolic' function
deff('F2 = fun2(y)',' F2 = '+fun(4,'y'))
F2=fun2(2) // does compute fun(4,2)
disp(F2)
I use code like this:
p = _belineInterpolateGrid( map( p -> sin(norm(p)), grid ), grid )
f = open("/data/test.function", "w")
serialize( f, p )
close(f)
p0 = deserialize( open("/data/test.function", "r") )
where _belineInterpolateGrid is
function _belineInterpolateGrid(PP, Grid)
    ...
    P = Array(Function, N-1, M-1);
    ...
    poly = (x, y) -> begin
        i_x, i_y = i(x, y);
        return P[i_x, i_y](x, y);
    end
    return poly
end
And now, since some version of v0.4, I get an error:
ERROR: MethodError: `convert` has no method matching
convert(::Type{LambdaStaticData}, ::Array{Any,1})
This may have arisen from a call to the constructor LambdaStaticData(...),
since type constructors fall back to convert methods.
Closest candidates are:
call{T}(::Type{T}, ::Any)
convert{T}(::Type{T}, ::T)
...
in deserialize at serialize.jl:435
Why does this happen? Is it a bug, and how do I fix it?
This looks like a bug in Julia to me, and it looks like it has been fixed as of v0.4.6. Try upgrading to that version or newer and see if the problem persists.
You're returning a lambda, that's why. Can't tell if it's a bug (you can serialize a lambda but you can't deserialize it?).
You can avoid this by defining your "get interpolation at x,y" as a type:
import Base: getindex

type MyPoly
    thepoly
end

function getindex(p::MyPoly, x::Int, y::Int)
    p.thepoly[x+5*y]
end

function getindex(p::MyPoly, I...)
    p.thepoly[I...]
end

function call(p::MyPoly, v)
    # collect helps keep eltype(ans) == Int
    powered_v = map(i -> v^i, collect(take(countfrom(), size(p.thepoly, 1))))
    powered_v .* p.thepoly
end

p = MyPoly(collect(1:10))
println(p[1])
f = open("serializedpoly", "w")
serialize(f, p)
close(f)
p0 = deserialize(open("serializedpoly", "r"))
println(p[1, 1])
v = call(p, 4)   # evaluate poly on 4
EDIT: added extension for call
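On current Julia (1.x) the same idea can be written with the Serialization stdlib and a functor method instead of the old call extension. The following is only an illustrative sketch under those assumptions (the Poly struct and its evaluation rule are mine, not the original answer's code):

using Serialization

# Store only plain data in the struct; any closures can be rebuilt from it after loading.
struct Poly
    coeffs::Vector{Float64}
end

# Functor syntax replaces the old `call(p, v)` extension: evaluate the polynomial at v.
(p::Poly)(v) = sum(c * v^(i - 1) for (i, c) in enumerate(p.coeffs))

p = Poly([1.0, 0.0, 3.0])                 # 1 + 3v^2

open(io -> serialize(io, p), "serializedpoly", "w")
p0 = open(deserialize, "serializedpoly")

p0(2.0) == p(2.0)                         # true: 13.0

Because the struct holds only plain data (the coefficient vector), it round-trips through serialize/deserialize without running into the lambda problem.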
I wanted to use a user-defined kernel function for ksvm in R.
So, as practice, I tried to make a vanilladot kernel and compare it with the "vanilladot" that is built into "kernlab".
I wrote my kernel as follows.
#
###vanilla kernel with class "kernel"
#
kfunction.k <- function(){
  k <- function (x,y){crossprod(x,y)}
  class(k) <- "kernel"
  k}
l<-0.1 ; C<-1/(2*l)
###use kfunction.k
tmp<-ksvm(x,factor(y),scaled=FALSE, type = "C-svc", kernel=kfunction.k(), C = C)
alpha(tmp)[[1]]
ind<-alphaindex(tmp)[[1]]
x.s<-x[ind,] ; y.s<-y[ind]
w.class.k<-t(alpha(tmp)[[1]]*y.s)%*%x.s
w.class.k
I thought the result of this operation would be equal to that of the following.
However, it isn't.
#
###use "vanilladot"
#
l<-0.1 ; C<-1/(2*l)
tmp1<-ksvm(x,factor(y),scaled=FALSE, type = "C-svc", kernel="vanilladot", C = C)
alpha(tmp1)[[1]]
ind1<-alphaindex(tmp1)[[1]]
x.s<-x[ind1,] ; y.s<-y[ind1]
w.tmp1<-t(alpha(tmp1)[[1]]*y.s)%*%x.s
w.tmp1
I think this problem may be related to the kernel class.
When the class is set to "kernel", this problem occurs.
However, when the class is set to "vanillakernel", the result of ksvm using the user-defined kernel is equal to that of ksvm using the "vanilladot" built into kernlab.
#
###vanilla kernel with class "vanillakernel"
#
kfunction.v.k <- function(){
  k <- function (x,y){crossprod(x,y)}
  class(k) <- "vanillakernel"
  k}
# The only difference between kfunction.k and kfunction.v.k is "class(k)".
l<-0.1 ; C<-1/(2*l)
###use kfunction.v.k
tmp<-ksvm(x,factor(y),scaled=FALSE, type = "C-svc", kernel=kfunction.v.k(), C = C)
alpha(tmp)[[1]]
ind<-alphaindex(tmp)[[1]]
x.s<-x[ind,] ; y.s<-y[ind]
w.class.v.k<-t(alpha(tmp)[[1]]*y.s)%*%x.s
w.class.v.k
I don't understand why the result is different from "vanilladot" when the class is set to "kernel".
Is there an error in my operation?
First, it seems like a really good question!
Now to the point. In the sources of ksvm we can find where the line is drawn between using a user-defined kernel and the built-ins:
if (type(ret) == "spoc-svc") {
if (!is.null(class.weights))
weightedC <- class.weights[weightlabels] * rep(C,
nclass(ret))
else weightedC <- rep(C, nclass(ret))
yd <- sort(y, method = "quick", index.return = TRUE)
xd <- matrix(x[yd$ix, ], nrow = dim(x)[1])
count <- 0
if (ktype == 4)
K <- kernelMatrix(kernel, x)
resv <- .Call("tron_optim", as.double(t(xd)), as.integer(nrow(xd)),
as.integer(ncol(xd)), as.double(rep(yd$x - 1,
2)), as.double(K), as.integer(if (sparse) xd@ia else 0),
as.integer(if (sparse) xd@ja else 0), as.integer(sparse),
as.integer(nclass(ret)), as.integer(count), as.integer(ktype),
as.integer(7), as.double(C), as.double(epsilon),
as.double(sigma), as.integer(degree), as.double(offset),
as.double(C), as.double(2), as.integer(0), as.double(0),
as.integer(0), as.double(weightedC), as.double(cache),
as.double(tol), as.integer(10), as.integer(shrinking),
PACKAGE = "kernlab")
reind <- sort(yd$ix, method = "quick", index.return = TRUE)$ix
alpha(ret) <- t(matrix(resv[-(nclass(ret) * nrow(xd) +
1)], nclass(ret)))[reind, , drop = FALSE]
coef(ret) <- lapply(1:nclass(ret), function(x) alpha(ret)[,
x][alpha(ret)[, x] != 0])
names(coef(ret)) <- lev(ret)
alphaindex(ret) <- lapply(sort(unique(y)), function(x)
which(alpha(ret)[,
x] != 0))
xmatrix(ret) <- x
obj(ret) <- resv[(nclass(ret) * nrow(xd) + 1)]
names(alphaindex(ret)) <- lev(ret)
svindex <- which(rowSums(alpha(ret) != 0) != 0)
b(ret) <- 0
param(ret)$C <- C
}
The important parts are two things: first, if we provide ksvm with our own kernel, then ktype=4 (while for vanillakernel, ktype=0), which makes two changes:
in the case of a user-defined kernel, the kernel matrix is computed instead of actually using the kernel
the tron_optim routine is run with the information regarding the kernel
Now, in svm.cpp we can find the tron routines, and in tron_run (called from tron_optim) we see that the LINEAR kernel has a separate optimization routine:
if (param->kernel_type == LINEAR)
{
/* lots of code here */
while (Cpj < Cp)
{
totaliter += s.Solve(l, prob->x, minus_ones, y, alpha, w,
Cpj, Cnj, param->eps, sii, param->shrinking,
param->qpsize);
/* lots of code here */
}
totaliter += s.Solve(l, prob->x, minus_ones, y, alpha, w, Cp, Cn,
param->eps, sii, param->shrinking, param->qpsize);
delete[] w;
}
else
{
Solver_B s;
s.Solve(l, BSVC_Q(*prob,*param,y), minus_ones, y, alpha, Cp, Cn,
param->eps, sii, param->shrinking, param->qpsize);
}
As you can see, the linear case is treated in a more complex, more detailed way. There is an inner optimization loop calling the solver many times. It would require a really deep analysis of the actual optimization being performed here, but at this point one can answer your question in the following way:
There is no error in your operation
kernlab's SVM has a separate routine for training an SVM with a linear kernel, chosen based on the type of kernel passed to the code; changing "kernel" to "vanillakernel" made ksvm think it was actually working with vanillakernel, and so it performed this separate optimization routine
It does not seem to be a bug, in fact, as the linear SVM really is very different from the kernelized version in terms of efficient optimization techniques. The amount of heuristics as well as numerical issues that have to be taken care of is really big. As a result, some approximations are required and can lead to different results. While for a rich feature space (like the one induced by the RBF kernel) it should not really matter, for simple kernels like the linear one these simplifications can lead to significant output changes.