When an array has many elements, the Julia REPL only shows part of it. For example:
julia> x = rand(100,2);
julia> x
100×2 Array{Float64,2}:
0.277023 0.0826133
0.186201 0.76946
0.534247 0.777725
0.942698 0.0239694
0.498693 0.0285596
⋮
0.383858 0.959607
0.987775 0.20382
0.319679 0.69348
0.491127 0.976363
Is there any way to show all elements in the vertical form as above? print(x) or showall(x) put it in an ugly form without line breaks.
NOTE: in 0.7, Base.STDOUT has been renamed to Base.stdout. The rest should work unchanged.
---
There are a lot of internally used methods in base/arrayshow.jl doing stuff related to this. I found
Base.print_matrix(STDOUT, x)
to work. The limiting behaviour can be restored by using an IOContext:
Base.print_matrix(IOContext(STDOUT, :limit => true), x)
However, this method only prints the values, not the header information containing the type. But we can retrieve that header using summary (which I found out looking at this).
Combining both:
function myshowall(io, x, limit = false)
    println(io, summary(x), ":")
    Base.print_matrix(IOContext(io, :limit => limit), x)
end
Example:
julia> myshowall(STDOUT, x[1:30, :], true)
30×2 Array{Float64,2}:
0.21730681784436 0.5737060668051441
0.6266216317547848 0.47625168078991886
0.9726153326748859 0.8015583406422266
0.2025063774372835 0.8980835847636988
0.5915731785584124 0.14211295083173403
0.8697483851126573 0.10711267862191032
0.2806684748462547 0.1663862576894135
0.87125664767098 0.1927759597335088
0.8106696671235174 0.8771542319415393
0.14276026457365587 0.23869679483621642
0.987513511756988 0.38605250840302996
⋮
0.9587892008777128 0.9823155299532416
0.893979917305394 0.40184945077330836
0.6248799650411605 0.5002213828574473
0.13922016844193186 0.2697416140839628
0.9614124092388507 0.2506075363030087
0.8403420376444073 0.6834231190218074
0.9141176587557365 0.4300133583400858
0.3728064777779758 0.17772360447862634
0.47579213503909745 0.46906998919124576
0.2576800028360562 0.9045669936804894
julia> myshowall(STDOUT, x[1:30, :], false)
30×2 Array{Float64,2}:
0.21730681784436 0.5737060668051441
0.6266216317547848 0.47625168078991886
0.9726153326748859 0.8015583406422266
0.2025063774372835 0.8980835847636988
0.5915731785584124 0.14211295083173403
0.8697483851126573 0.10711267862191032
0.2806684748462547 0.1663862576894135
0.87125664767098 0.1927759597335088
0.8106696671235174 0.8771542319415393
0.14276026457365587 0.23869679483621642
0.987513511756988 0.38605250840302996
0.8230271471019499 0.37242899586931943
0.9138200958138099 0.8068913133278408
0.8525161103718151 0.5975492199077801
0.20865490007184317 0.7176626477090138
0.708988887470049 0.8600690517032243
0.5858885634109547 0.9900228746877875
0.4207526577539027 0.4509115980616851
0.26721679563705836 0.38795692270409465
0.5627701589178917 0.5191793105440308
0.9587892008777128 0.9823155299532416
0.893979917305394 0.40184945077330836
0.6248799650411605 0.5002213828574473
0.13922016844193186 0.2697416140839628
0.9614124092388507 0.2506075363030087
0.8403420376444073 0.6834231190218074
0.9141176587557365 0.4300133583400858
0.3728064777779758 0.17772360447862634
0.47579213503909745 0.46906998919124576
0.2576800028360562 0.9045669936804894
However, I would wait for some opinions about whether print_matrix can be relied upon, given that it is not exported from Base...
A short/clean solution is
show(stdout, "text/plain", rand(100,2))
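This prints every row because :limit is not set on the plain stdout stream. To make that explicit, the same call can be wrapped in an IOContext (a small variant of the line above, not a different API):

show(IOContext(stdout, :limit => false), "text/plain", x)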
---
I am a newbie to Julia and Flux, with some experience in TensorFlow/Keras and Python. I tried to use the Flux.withgradient command to write a user-defined training function with more flexibility. Here is the training part of my code:
loss, grad = Flux.withgradient(modelDQN.evalParameters) do
    qEval = modelDQN.evalModel(evalInput)
    Flux.mse(qEval, qTarget)
end
Flux.update!(modelDQN.optimizer, modelDQN.evalParameters, grad)
This code works just fine. But if I move the line qEval = modelDQN.evalModel(evalInput) outside the do ... end block, as follows:
qEval = modelDQN.evalModel(evalInput)
loss, grad = Flux.withgradient(modelDQN.evalParameters) do
    Flux.mse(qEval, qTarget)
end
Flux.update!(modelDQN.optimizer, modelDQN.evalParameters, grad)
The model parameters will not be updated. As far as I know, the do ... end block works as an anonymous function that takes zero arguments. Then why does the call qEval = modelDQN.evalModel(evalInput) need to be inside the block for the model to get updated?
The short answer is that anything to be differentiated has to happen inside the (anonymous) function which you pass to gradient (or withgradient), because this is very much not a standard function call -- Zygote (Flux's auto-differentiation library) traces its execution to compute the derivative, and can't transform what it can't see.
Longer, this is Zygote's "implicit" mode, which relies on global references to arrays. The simplest use is something like this:
julia> using Zygote
julia> x = [2.0, 3.0];
julia> g = gradient(() -> sum(x .^ 2), Params([x]))
Grads(...)
julia> g[x] # lookup by objectid(x)
2-element Vector{Float64}:
4.0
6.0
If you move some of that calculation outside, then you make a new array y with a new objectid. Julia has no memory of where this came from; it is completely unrelated to x. They are ordinary arrays, not a special tracked type.
So if you refer to y in the gradient, Zygote cannot infer how this depends on x:
julia> y = x .^ 2 # calculate this outside of gradient
2-element Vector{Float64}:
4.0
9.0
julia> g2 = gradient(() -> sum(y), Params([x]))
Grads(...)
julia> g2[x] === nothing # represents zero
true
Zygote doesn't have to be used in this way. It also has an "explicit" mode which does not rely on global references. This is perhaps less confusing:
julia> gradient(x1 -> sum(x1 .^ 2), x) # x1 is a local variable
([4.0, 6.0],)
julia> gradient(x1 -> sum(y), x)  # sum(y) is obviously independent of x1
(nothing,)
julia> gradient((x1, y1) -> sum(y1), x, y)
(nothing, Fill(1.0, 2))
Flux is in the process of changing to use this second form. On v0.13.9 or later, something like this ought to work:
opt_state = Flux.setup(modelDQN.optimizer, modelDQN.model)  # do this once

loss, grads = Flux.withgradient(modelDQN.model) do m
    qEval = m(evalInput)  # m is a local variable
    Flux.mse(qEval, qTarget)
end

Flux.update!(opt_state, modelDQN.model, grads[1])
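For reference, here is the same pattern as a self-contained sketch that can be pasted into a fresh session (the model, data, and optimiser below are made up for illustration, not taken from the question):

using Flux

model = Dense(4 => 2)                            # toy model
x, y = rand(Float32, 4, 8), rand(Float32, 2, 8)  # toy data

opt_state = Flux.setup(Adam(0.01), model)  # do this once

loss, grads = Flux.withgradient(model) do m
    Flux.mse(m(x), y)  # everything to be differentiated happens inside this block
end

Flux.update!(opt_state, model, grads[1])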
I have a function like
function f(a = 1; first = 5, second = "asdf")
    return a
end
Is there any way to programmatically return a vector with the names of the keyword arguments? Something like:
kwargs(f)
# returns [:first, :second]
I realise that this might be complicated by having multiple methods for a function name. But I was hoping this would still be possible if the exact method is specified. For instance:
kwargs(methods(f).ms[1])
# returns [:first, :second]
Just use Base.kwarg_decl()
julia> Base.kwarg_decl.(methods(f))
2-element Vector{Vector{Symbol}}:
[]
[:first, :second]
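To get the single flat vector that the question asks for, a small wrapper along these lines should work (a sketch; kwarg_decl is unexported, so its behaviour may change between Julia versions):

kwargs(f) = unique!(reduce(vcat, Base.kwarg_decl.(methods(f))))

kwargs(f)  # returns [:first, :second] for the f above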
If you need the first parameter a as well you could also try:
julia> Base.method_argnames.(methods(f))
2-element Vector{Vector{Symbol}}:
[Symbol("#self#")]
[Symbol("#self#"), :a]
My question is: instead of F = svd(A), can one first allocate appropriate memory for an SVD structure, and then do F .= svd(A)?
What I had in mind is something like the following:
function main()
    F = Vector{SVD}(undef, 10)
    # how to preallocate F?
    test(F)
end

function test(F::Vector{SVD})
    for i in 1:10
        F .= svd(rand(3, 3))
    end
end
Your code almost works. But what you probably wanted was this:
using LinearAlgebra

function main()
    F = Vector{SVD}(undef, 10)
    test(F)
end

function test(F::Vector{SVD})
    for i in 1:10
        F[i] = svd(rand(3, 3))
    end
    return F
end
The line that you had in the for loop was this:
F .= svd(rand(3,3))
which does the same operation on every loop, since you were not indexing into F. In particular, this operation was trying to broadcast a single SVD object into all the fields of F on each iteration of the loop. (And that broadcast operation failed because by default structs are treated as iterable objects with a length method, but SVD does not have a length method.)
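(As an aside: to deliberately broadcast one object into every slot, you would have to mark it as a scalar with Ref. That runs, but every entry then refers to the same factorization, which is not what the question wants; the line below is just for illustration.)

F .= Ref(svd(rand(3, 3)))  # fills all 10 slots with references to one identical SVD object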
However, I would recommend against pre-allocating a vector in this situation. First, let's look at the type of F:
julia> typeof(Vector{SVD}(undef, 10))
Array{SVD,1}
The problem with this vector is that it is parameterized by a non-concrete type. There is a section in the Performance Tips chapter of the manual that advises against this. SVD on its own is not a concrete type, because its type parameters have not been specified. To make it concrete, you need to specify the types of the parameters, like this:
julia> SVD{Float64,Float64,Array{Float64,2}}
SVD{Float64,Float64,Array{Float64,2}}
julia> Vector{SVD{Float64,Float64,Array{Float64,2}}}(undef, 2)
2-element Array{SVD{Float64,Float64,Array{Float64,2}},1}:
#undef
#undef
As you can see, it is difficult to correctly specify the concrete type when you are working with complicated types like SVD. Additionally, if you do so, your code will not be as generic as it could be.
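If you do need the concrete type, it is usually safer to ask Julia for it than to spell it out by hand; for instance (the exact parameters printed may differ between Julia versions):

julia> typeof(svd(rand(3, 3)))
SVD{Float64,Float64,Array{Float64,2}}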
A better approach for a problem like this is to use mapping, broadcasting, or a list comprehension. Then the correct output type will automatically be inferred. Here are some examples:
List comprehension
julia> [svd(rand(3, 3)) for _ in 1:2]
2-element Array{SVD{Float64,Float64,Array{Float64,2}},1}:
SVD{Float64,Float64,Array{Float64,2}}([-0.6357040496635746 -0.2941425771794837 -0.7136949667270628; -0.45459999623274916 -0.6045700314848496 0.654090147040599; -0.6238743500629883 0.7402534845042064 0.2506104028424691], [1.4535849689665463, 0.7212190827260345, 0.05010669163393896], [-0.5975505057447164 -0.588792736048385 -0.5442945039782142; 0.7619724725128861 -0.6283345569895092 -0.15682358121595258; -0.2496624605679292 -0.5084474392397449 0.8241054891903787])
SVD{Float64,Float64,Array{Float64,2}}([-0.5593632049776268 0.654338345992878 -0.5088753618327984; -0.6687620652652163 -0.7189576326033171 -0.18936003428293915; -0.4897653570633183 0.23439550227070827 0.8397551092645418], [1.8461274187259178, 0.21226179692488983, 0.14194607536315287], [-0.29089551972856004 -0.7086270946133293 -0.6428276887173754; -0.9203610429640889 0.023709029028269546 0.390350397126212; 0.2613720474647311 -0.7051847436823973 0.6590896221923739])
Map
julia> map(_ -> svd(rand(3, 3)), 1:2)
2-element Array{SVD{Float64,Float64,Array{Float64,2}},1}:
SVD{Float64,Float64,Array{Float64,2}}([-0.5807809149601634 0.5635242755434755 0.5874809951745127; -0.6884131975465821 0.0451903888051729 -0.7239095925620322; -0.43448912329507794 -0.8248625459025509 0.3616918330643316], [1.488618654040125, 0.4122166626927311, 0.004235624485479941], [-0.6721098925787947 -0.2684664121709399 -0.6900681689759235; -0.7384292974335966 0.31185073633575333 0.5978890289498324; -0.05468514413847799 -0.9114136842196914 0.4078414290231468])
SVD{Float64,Float64,Array{Float64,2}}([-0.3677873424759118 0.8090638526628051 -0.4584191892023337; -0.43071684640222546 -0.5851169278783189 -0.6871107472129654; -0.8241452960126802 -0.055261768200600137 0.5636760310989947], [1.6862363968739773, 0.5899255050748418, 0.24246688716190598], [-0.3751742784957875 -0.7172409091515735 -0.5872050229643736; 0.8600668700980193 -0.505618838823938 0.06807766730822862; -0.3457300098559026 -0.4794945964927631 0.8065703268899])
Broadcasting
julia> g = (rand(3, 3) for _ in 1:2)
Base.Generator{UnitRange{Int64},var"#17#18"}(var"#17#18"(), 1:2)
julia> svd.(g)
2-element Array{SVD{Float64,Float64,Array{Float64,2}},1}:
SVD{Float64,Float64,Array{Float64,2}}([-0.7988295268840152 0.5443221484534134 -0.256095266807727; -0.5436890668169485 -0.8354777569473182 -0.0798693700362902; -0.257436566171119 0.07543418554831638 0.963346302244777], [1.8188722412547844, 0.3934389096422389, 0.2020398396772306], [-0.7147404794808727 -0.37763644211761316 -0.5886737335538281; -0.6944558966482991 0.4830041206449164 0.5333273169925189; -0.08292800854873916 -0.7899985677359054 0.607474450798845])
SVD{Float64,Float64,Array{Float64,2}}([-0.5910620103531503 0.3599866268397522 0.7218416228050514; -0.7367495542691711 0.12340124384185132 -0.664809918173956; -0.3283988340440176 -0.9247603805931685 0.1922821996018057], [1.826019614357666, 0.5333148215847028, 0.11639139812894106], [-0.6415954756495915 -0.6888196183142843 -0.33746522643279503; -0.5845558664639438 0.7239484700883465 -0.3663236978948133; -0.4966383841474222 0.037764349353666515 0.8671356118331964])
Furthermore, mapping, broadcasting, and list comprehensions should be just as efficient as pre-allocating the vector. If you're doing a simple mapping, then it's usually easier and more readable to use mapping, broadcasting, or list comprehensions. Pre-allocating vectors is a tool I reserve for writing custom algorithms from scratch.
A final note. In most cases, type parameters are considered an implementation detail and are not a part of the public API for a type. As such, it's best to use generic programming approaches that do not rely on fixing the types for type parameters. Of course there are some exceptions to this rule of thumb, like Array{T,N} and Dict{K,V}.
There's a different way of preallocation -- you can reuse the input array by always overwriting it, both for the rand! call and for svd!'s internal needs:
using LinearAlgebra, Random  # svd! lives in LinearAlgebra, rand! in Random

function test!(F::Vector{SVD})
    A = Matrix{Float64}(undef, 3, 3)
    for i in 1:10
        rand!(A)
        F[i] = svd!(A)
    end
end
Cameron's advice still holds. I'd probably use something like
function test()
    A = Matrix{Float64}(undef, 3, 3)
    return map(1:10) do i
        svd!(rand!(A))
    end
end
given that the number of loop iterations seems not to be the critical part.
Consider an existing function in Base, which takes in a variable number of arguments of some abstract type T. I have defined a subtype S<:T and would like to write a method which dispatches if any of the arguments is my subtype S.
As an example, consider function Base.cat, with T being an AbstractArray and S being some MyCustomArray <: AbstractArray.
Desired behaviour:
julia> v = [1, 2, 3];
julia> cat(v, v, v, dims=2)
3×3 Array{Int64,2}:
1 1 1
2 2 2
3 3 3
julia> w = MyCustomArray([1,2,3])
julia> cat(v, v, w, dims=2)
"do something fancy"
Attempt:
function Base.cat(w::MyCustomArray, a::AbstractArray...; dims)
    println("do something fancy")
end
But this only works if the first argument is MyCustomArray.
What is an elegant way of achieving this?
I would say that it is not possible to do it cleanly without type piracy (but if it is possible I would also like to learn how).
For example consider cat that you asked about. It has one very general signature in Base (actually not requiring A to be AbstractArray as you write):
julia> methods(cat)
# 1 method for generic function "cat":
[1] cat(A...; dims) in Base at abstractarray.jl:1654
You could write a specific method:
Base.cat(A::AbstractArray...; dims) = ...
and check if any of elements of A is your special array, but this would be type piracy.
Now the problem is that you cannot even write Union{S, T}: since S <: T, it is resolved to just T.
This would mean that you would have to use S explicitly in the signature, but then even:
f(::S, ::T) = ...
f(::T, ::S) = ...
is problematic, and the compiler will ask you to define f(::S, ::S), as the above definitions lead to dispatch ambiguity. So even if you wanted to limit the number of varargs to some maximum, you would have to annotate types for all divisions of A into subsets to avoid dispatch ambiguity (which is doable using macros, but grows the number of required methods exponentially).
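For completeness, the runtime check mentioned above could look like the following when written as a non-pirating wrapper (mycat is an illustrative name, not part of any API):

mycat(A::AbstractArray...; dims) =
    any(a -> a isa MyCustomArray, A) ? "do something fancy" : cat(A...; dims=dims)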
For general usage, I concur with Bogumił, but let me make an additional comment. If you have control over how cat is called, you can at least write some kind of trait-dispatch code:
struct MyCustomArray{T, N} <: AbstractArray{T, N}
    x::Array{T, N}
end
HasCustom() = Val(false)
HasCustom(::MyCustomArray, rest...) = Val(true)
HasCustom(::AbstractArray, rest...) = HasCustom(rest...)
# `IsCustom` or something would be more elegant, but `Val` is quicker for now
Base.cat(::Val{true}, args...; dims) = println("something fancy")
Base.cat(::Val{false}, args...; dims) = cat(args...; dims=dims)
And the compiler is cool enough to optimize that away:
julia> args = (v, v, w);
julia> @code_warntype cat(HasCustom(args...), args...; dims=2);
Variables
#self#::Core.Compiler.Const(cat, false)
#unused#::Core.Compiler.Const(Val{true}(), false)
args::Tuple{Array{Int64,1},Array{Int64,1},MyCustomArray{Int64,1}}
Body::Nothing
1 ─ %1 = Main.println("something fancy")::Core.Compiler.Const(nothing, false)
└── return %1
If you don't have control over calls to cat, the only resort I can think of to make the above technique work is to overdub methods containing such a call, replacing matching calls with the custom implementation. In that case you don't even need to overload cat, but can directly replace it with some mycat doing your fancy stuff.
I have already output some content in the REPL. Is there any function to print all this content to a file?
If those outputs have already been printed in the REPL, I guess the only way to save them to a file is copy-pasting manually. But if you would like to save the REPL output history for future use, one way is to overload display:
shell> touch repl_history.txt
julia> using REPL
julia> function REPL.display(d::REPL.REPLDisplay, mime::MIME"text/plain", x)
           io = REPL.outstream(d.repl)
           get(io, :color, false) && write(io, REPL.answer_color(d.repl))
           if isdefined(d.repl, :options) && isdefined(d.repl.options, :iocontext)
               # this can override the :limit property set initially
               io = foldl(IOContext, d.repl.options.iocontext,
                          init=IOContext(io, :limit => true, :module => Main))
           end
           show(io, mime, x)
           println(io)
           open("repl_history.txt", "a") do f
               show(f, mime, x)
               println(f)
           end
           nothing
       end
then, let's print something random in the REPL:
julia> rand(10)
10-element Array{Float64,1}:
0.37537591915616497
0.9478991508737484
0.32628512501942475
0.8888960925262224
0.9967927432272801
0.4910769590205608
0.7624517049991089
0.26310423494973545
0.5117608425961135
0.0762255311602309
help?> gcd
search: gcd gcdx significand
gcd(x,y)
Greatest common (positive) divisor (or zero if x and y are both zero).
Examples
≡≡≡≡≡≡≡≡≡≡
julia> gcd(6,9)
3
julia> gcd(6,-9)
3
And here is what the file content looks like:
shell> cat repl_history.txt
10-element Array{Float64,1}:
0.37537591915616497
0.9478991508737484
0.32628512501942475
0.8888960925262224
0.9967927432272801
0.4910769590205608
0.7624517049991089
0.26310423494973545
0.5117608425961135
0.0762255311602309
gcd(x,y)
Greatest common (positive) divisor (or zero if x and y are both zero).
Examples
≡≡≡≡≡≡≡≡≡≡
julia> gcd(6,9)
3
julia> gcd(6,-9)
3
If there is no need to work with the REPL interactively, simply running julia script.jl > output.txt might also do the trick.
What you've printed isn't saved anywhere, so no, there's no way to do that. It may be that there's something easy to do, but without more details it's impossible to say, really.
If you want to save variables to a file, you can use the JLD2 package. Then you can save each variable as below:
using JLD2, FileIO

hello = "world"
foo = :bar

@save "example.jld2" hello foo