Replicating a Julia example

I am trying to learn Julia, and I read this article about its quick success. On the last page the author works through a small example showing the benefits of multiple dispatch. They define a custom type Spect and define a plot() method for it. Then, for an object sqw of type Spect, they can call plot(sqw) without having to edit the original plot function. Moreover, this definition also carries over to similar plotting functions, so you can also call scatter(sqw) without problems. My issue is that the author does not show the code, so I do not understand how this is achieved. I am especially interested in the fact that just defining plot() for this new type is enough to also call other functions like scatter() without defining them separately.
Can someone write a small example like the one in the article so that I can understand how all of this is achieved? Thanks in advance.

Cross posting my answer from Discourse:
It’s a shame the article doesn’t link to the code. Here’s my rough reproduction attempt. My version uses the DCT and inverse DCT (dct/idct from FFTW), so I’m not getting the nice harmonics, but I think it shows the ideas pretty well.
using RecipesBase, FFTW
struct Spect
    points  :: AbstractRange     # sample locations
    weights :: Vector{Float64}   # DCT coefficients of the sampled values
end

# Sample f at n points in [min, max] and store the DCT of the samples
function Spect(f::Function, min, max, n)
    points = range(min, max, n)
    Spect(points, dct(f.(points)))
end

# The recipe tells Plots how to turn a Spect into plottable x/y data
@recipe function f(S::Spect)
    S.points, idct(S.weights)
end
These definitions are enough for
using Plots
squarewave(x) = iseven(floor(x)) ? 1.0 : 0.0
sqw = Spect(squarewave, 0, 5, 20);
plot(sqw)
scatter(sqw)
and both render the reconstructed signal, as a line plot and a scatter plot respectively.
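To address the part of the question about scatter: the recipe above does not define plot for Spect at all; it teaches Plots how to convert a Spect into plain x/y data, and every series function (plot, scatter, bar, ...) funnels its input through that same conversion, so they all pick up the new type for free. A quick illustration using only standard Plots keywords, nothing Spect-specific:

scatter(sqw)                    # equivalent to plot(sqw, seriestype = :scatter)
plot(sqw, seriestype = :bar)    # any other series type reuses the same recipe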

Related

Heaviside theta integration with Mathematica

I am trying to integrate a Heaviside theta function with two signs inside and Mathematica won't give me a solution. Is there any way of improving the approach before just acknowledging that Mathematica cannot integrate it?
Some things you do really worry me.
You use X both as a variable and as the name of a function. I've kept the variable X and renamed the function Xf.
You use ω both as a variable and as the name of a function. I've kept the variable ω and renamed the function ωf.
You use = rather than := in your function definitions. I've changed that.
With those changes I have
Xf[s1_,s2_,α_]:=(1+s2(-2+α)-α+s1(-1+2s2+α))/((-1+2 s1)(-1+2s2));
ωf[s1_,s2_,α_]:=((1-s2+s1(-3+4s2))(1+s2(-2+α)-α+s1(-1+2s2+α)))/((-1+2s1)(-1+s1+s2)(-1+2s2));
λ1[s1_,s2_,α_,X_,ω_]:=1/2(s1-2s2-3s1 X+3s2 X-α+s1 α+s2 α+ω-s1 ω-s2 ω-
1/2Sqrt[(4(α-ω+s2(2-3X-α-ω)+s1(-1+3X-α+ω))^2+8(2-5X+4X^2-2α+2X α+
s1(-2+4s2(1-2X)^2+7X-8X^2+2α-2X α-ω)+ω-s2(4-11X+8X^2-2α+2X α+ω)))]);
λ2[s1_,s2_,α_,X_,ω_]:=1/2(s1-2s2-3s1 X+3s2 X-α+s1 α+s2 α+ω-s1 ω-s2 ω+
1/2Sqrt[(4(α-ω+s2(2-3X-α-ω)+s1(-1+3X-α+ω))^2+8(2-5X+4X^2-2α+2X α+
s1(-2+4s2(1-2X)^2+7X-8X^2+2α-2X α-ω)+ω-s2(4-11X+8X^2-2α+2X α+ω)))]);
λ1simp[s1_,s2_,α_]:=λ1[s1,s2,α,Xf[s1,s2,α],ωf[s1,s2,α]];
λ2simp[s1_,s2_,α_]:=λ2[s1,s2,α,Xf[s1,s2,α],ωf[s1,s2,α]];
fint[s1_,s2_]:=HeavisideTheta[Sign[-λ1simp[s1,s2,α]]*Sign[-λ2simp[s1,s2,α]]];
Please check that very carefully to see if I've made any mistakes.
Now I want to look at your integrand and see what Mathematica sees.
Simplify[fint[s1,s2],1/2<=s1<=1&&1/2<=s2<=s1]
And it responds with a piecewise result:
HeavisideTheta[ComplexInfinity]        when 2 s1 == 1 || 2 s2 == 1 || s1 + s2 == 1
HeavisideTheta[Sign[...]*Sign[...]]    otherwise (condition True)
so it looks like your integrand is blowing up at the boundary.
I check that with
Simplify[fint[1/2,s2]]
or
Simplify[fint[s1,1/2]]
and it responds with 1/0, Indeterminate, and HeavisideTheta[Indeterminate].
When it isn't at the boundary, for example
Simplify[fint[3/4,3/4]]
it returns
HeavisideTheta[Sign[5-4*Sqrt[7-28*α+25*α^2]]*Sign[5+4*Sqrt[7-28*α+25*α^2]]]
and that probably says that α is a free variable and we aren't able to determine the value of the Sign without more information.
So I think this is a strong hint about where to begin looking for why your integration does not simply complete.
If you are curious what that integrand looks like then try
α=1/4;
Plot3D[fint[s1,s2],{s1,1/2,1},{s2,1/2,s1}]

Julia not returning the same value when executing a function or every of its lines

I am a very new Julia user (coming from Matlab), so forgive me if I ask a very dumb question.
I currently have some Julia code which works (it runs fine), though it gives different results if I execute it as a function than if I run each of the function's lines interactively.
My script is mostly about linear algebra and uses Arrays and Dicts.
Since I'm having trouble with the Juno debugger, I haven't found another way to debug my code, which is quite a shame.
I spent the last three hours on this and I still have no clue why these results differ.
I suspect I don't understand some very basic working process of julia related to variable allocation but I'm flying blind here.
Does anyone have an explanation for this behavior?
I can't provide the code here, but here is its basic structure. The M matrix returned by childfunction is wrong. a is a scalar and dict is a dictionary.
# calling function
function motherfunction(...)
    M = childfunction(a, dict)
end

# child function
function childfunction(...)
    ...
    M = *some linear algebra*
    return M
end
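Since the code isn't shown, here is a guess at the most common cause of this symptom, with hypothetical names: when you execute the body of childfunction line by line in the REPL, every name is a global, so a value left over from an earlier run can silently leak into the computation, whereas inside the function only the arguments you actually pass in are visible.

# Inside a function, the result depends only on the arguments passed in:
function childfunction(a, dict)
    M = a .* dict[:coeffs]      # hypothetical linear-algebra step
    return M
end

dict = Dict(:coeffs => [1.0 2.0; 3.0 4.0])
M = childfunction(2.0, dict)    # always uses a = 2.0

# Running the same line interactively uses whatever the globals currently hold:
a = 7.0                         # stale value from an earlier experiment
M2 = a .* dict[:coeffs]         # same source line, different result

Mutating dict inside childfunction is another classic source of run-to-run differences: Dicts are passed by reference, so state from one interactive run carries over into the next.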

Controlling the 'levels' of differentiation in SageMath 9.1

Sage seems to want to evaluate derivatives as far as possible using the product and chain rules. A simple example is:
var('theta')
f = function('f')(theta)
g = function('g')(theta)
h = f*g
diff(h,theta)
which would display
g(theta)*diff(f(theta), theta) + f(theta)*diff(g(theta), theta)
My question is, is there a way to control just how far Sage will take derivatives? In the example above for instance, how would I get Sage to display instead:
diff(f(theta)*g(theta))
I'm working through some pretty intensive derivations in fluid mechanics, and being able to stop derivatives from being evaluated all the way, as discussed above, would really help with this. Thanks in advance; I'd appreciate any help.
This would be called "holding" the derivative.
Adding this possibility to Sage has already been considered.
Progress on this is tracked at:
Sage Trac ticket 24861
and the ticket even links to a branch with code implementing this.
Although progress on this is stalled, and the branch has not been merged,
you could use the code from the branch.

Taking advantage of Julia's integration abilities

One of the main reasons I wanted to use Julia for my project is because of its speed, especially for calculating integrals.
I would like to integrate a 1-d function f(x) over some interval [a,b]. In general, the quadgk function from Julia's QuadGK package would be a fast and accurate solution. However, I do not have the function f(x), only its values f(xi) at a discrete set of points xi in [a,b], stored in an array. The xi's are regularly spaced, and I can make the spacing as small as I like.
Naively, I could simply define a function f that interpolates between the values f(xi) and feed it to quadgk (making the spacing as small as possible); however, I then won't know what my error is, which is a shame because QuadGK reports an error estimate along with its result.
Another solution is to write a function myself to integrate the array (with the trapezoid rule, for example), but that would defeat the purpose of using Julia...
What is the easiest way to accurately integrate a function only given discrete values using Julia?
Since you only have values, not the function itself, the trapezoid rule is probably your best bet. The Trapz package provides this (https://github.com/francescoalemanno/Trapz.jl). However, I think it is worth seeing how easy it is to write a pretty good implementation yourself.
# Composite trapezoidal rule for uniformly spaced samples with spacing h;
# with the default h = 1 this reduces to the sum minus half the endpoints
function trap(A, h = 1)
    return h * (sum(A) - (A[begin] + A[end]) / 2)
end
This takes about 2.9 ms for an array of 10 million Float64 values, and about the same for Int. If the elements were complex numbers it would still work (taking about 8.9 ms).
A method like this is a good example of how simple it can be to write pretty fast Julia code that is still fully generic.
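As a quick sanity check of both routes, here is a comparison on a made-up test function (the trapz(x, y) call is the 1-D usage shown in the Trapz.jl README):

using Trapz

x = range(0, π; length = 10_001)   # uniform grid over [0, π]
y = sin.(x)                        # sampled values; the exact integral is 2

trap(y, step(x))   # hand-rolled composite trapezoid, ≈ 2
trapz(x, y)        # Trapz.jl agrees

On the error question: the composite trapezoid error scales as O(h^2) for smooth integrands, so comparing the result at spacing h against the result at spacing 2h gives you a practical error estimate even without quadgk.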

How to speed up writing to a matrix in a reference class in R

Here is a piece of R code that writes to each element of a matrix in a reference class. It runs incredibly slowly, and I’m wondering if I’ve missed a simple trick that will speed this up.
nx = 2000
ny = 10
ref_matrix <- setRefClass(
  "ref_matrix",
  fields = list(data = "matrix")
)
out <- ref_matrix(data = matrix(0.0, nx, ny))
# tracemem(out$data)
for (iy in 1:ny) {
  for (ix in 1:nx) {
    out$data[ix, iy] <- ix + iy
  }
}
It seems that each write to an element of the matrix triggers a check that involves a copy of the entire matrix. (Uncommenting the tracemem() call shows this.) Now, I've found a discussion that seems to confirm this:
https://r-devel.r-project.narkive.com/8KtYICjV/rd-copy-on-assignment-to-large-field-of-reference-class
and this also seems to be covered by Speeding up field access in R reference classes
but in both of these cases the behaviour can be bypassed by not declaring a class for the field. This works for the example in the first link, which uses a 1D vector b that can just be set as b <<- 1:10000, but I've not found an equivalent way of creating a 2D array without using an explicit "matrix" instance.
Am I just missing something simple, or is this actually not possible?
Let me add a couple of things. First, I’m very new to R, so could easily have missed something. Second, I’m really just curious about the way reference classes work in this case and whether there’s a simple way to use them efficiently; I’m not looking for a really fast way to set the elements of a matrix - I can do that by not having the matrix in a reference class at all, and if I really care about speed I can write a C routine to do it and call it from R.
Here’s some background that might explain why I’m interested in this, which you’re welcome to ignore.
I got here by wanting to see how different languages, and even different compiler options and different ways of coding the same operation, compared for efficiency when accessing 2D rectangular arrays. I’ve been playing with a test program that creates two 2D arrays of the same size, and calls a subroutine that sets the first to the elements of the second plus their index values. (Almost any operation would do, but this one isn’t completely trivial to optimise.) I have this in a number of languages now, C, C++, Julia, Tcl, Fortran, Swift, etc., even hand-coded assembler (spoiler alert: assembler isn’t worth the effort any more) and thought I’d try R. The obvious implementation in R passes the two arrays to a subroutine that does the work, but because R doesn’t normally pass by reference, that routine has to make a copy of the modified array and return that as the function value. I thought using a reference class would avoid the relatively minor overhead of that copy, so I tried that and was surprised to discover that, far from speeding things up, it slowed them down enormously.
Use outer:
out$data <- outer(1:nx, 1:ny, `+`)  # single assignment: one copy check, not nx*ny of them
Also, don't use reference classes (or R6 classes) unless you actually need reference semantics. KISS and all that.
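For contrast, and since the asker mentions having a Julia version of this benchmark, here is a hypothetical Julia analogue of the loop: a struct field holds a reference to the matrix, so element writes go straight through with no copy-on-write check.

struct RefMatrix
    data::Matrix{Float64}   # the struct holds a reference; the array stays mutable
end

nx, ny = 2000, 10
out = RefMatrix(zeros(nx, ny))

for iy in 1:ny, ix in 1:nx
    out.data[ix, iy] = ix + iy   # in-place write, no copy
end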
