I’m trying to count things in a Julia vector with the goal of plotting a histogram. These things may be other arrays or simpler objects like Strings or Integers. My function currently uses counter (from the DataStructures.jl package), which works great for non-complex objects like strings or integers.
function viz(data::Vector)
    counts = counter(data)      # accumulator mapping element => count
    k = collect(keys(counts))
    v = collect(values(counts))
    bar(k, v ./ sum(v))         # plot normalized frequencies
end
In Python, I’d just do str(x) for x in the_list to convert the inner elements to strings, but I’m having trouble figuring out how to do this in Julia.
Or is there a better way to count complex objects in Julia? (I’m a beginner at Julia)
[string(x) for x in the_list]
# or
[String(x) for x in the_list]
string(x) (lowercase) converts any value to its printed representation, so it is probably the one you want; String(x) is a constructor and only accepts string-like arguments.
Take a look at StatsBase.countmap(x) and see if it does what you need.
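For example, here is a minimal sketch combining the two suggestions; the_list and the use of Plots.jl for bar are assumptions on my part:

using StatsBase, Plots

the_list = ["a", [1, 2], "a", [1, 2], 3]           # hypothetical mixed data

counts = countmap([string(x) for x in the_list])   # stringify, then count
k = collect(keys(counts))
v = collect(values(counts))
bar(k, v ./ sum(v))                                # normalized histogram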
I have functions f1 and f2 returning matrices m1 and m2, which are calculated using Diagonal, Tridiagonal, and SymTridiagonal from the LinearAlgebra package.
In a new function f3 I try computing
j = m1 - m2*im
m3 = exp(j)
but this gives a MethodError (no method matching exp(::LinearAlgebra.Tridiagonal ...)) unless I use j = Matrix(m1 - m2*im).
My question is: how can I do this computation in the most efficient way? I am a total beginner in Julia.
Unless you have a very special structure of j (i.e. if its exponential is sparse, which is unlikely), the best you can do AFAICT is to use a dense matrix as the input to exp:
m3 = LinearAlgebra.exp!([float(x) for x in Tridiagonal(dl, d, du)])
If you expect m3 to be sparse then I think currently there is no special algorithm implemented for that case in Julia.
Note that I use exp! to do the operation in place, and a comprehension to make sure the argument to exp! is dense. As exp! expects LinearAlgebra.BlasFloat (that is, Union{Complex{Float32}, Complex{Float64}, Float32, Float64}), I use float to make sure the elements of j are appropriately converted. Note that it might fail if you work with e.g. BigFloat or Float16 values; in that case you have to do the conversion to the expected types yourself.
I think Julia handles matrices with complex elements correctly.
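For illustration, a minimal sketch of the densify-then-exponentiate approach; the concrete m1, m2, and size n below are made up:

using LinearAlgebra

n = 4
m1 = Tridiagonal(fill(-1.0, n-1), fill(2.0, n), fill(-1.0, n-1))  # stand-in for f1's output
m2 = Diagonal(fill(0.5, n))                                       # stand-in for f2's output

j = m1 - m2*im         # still a structured matrix, now with complex entries
m3 = exp(Matrix(j))    # densify first; exp has no Tridiagonal method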
My task is to modify the spectrum of a Hermitian matrix H and return just the matrix with the modified spectrum. I.e., I have a function f(real_vec) -> real_vec that modifies the spectrum s(H) of a Hermitian matrix H = U[s(H)]U'. I need the result f(H) = U[f(s(H))]U'. I think it is possible to optimize by not computing eigfact(H) explicitly.
Therefore I tried to write my own eigmodif based on the Julia implementation of eigfact. That was difficult because I got lost at line 4816 in lapack.jl, where syevr() is wrapped.
I need to understand where and how Julia converts a COMPLEX HERMITIAN matrix to a REAL SYMMETRIC one. Theoretically it is possible, since we have a 2n-by-2n matrix J that squares to minus the identity; any n-by-n complex Hermitian matrix H can then be turned into real(H).I + imag(H).J, or in block form
[ real(H)  -imag(H) ]
[ imag(H)   real(H) ]
But how does Julia do this?
Not an expert on LAPACK, but perhaps it is the use of metaprogramming in the definition of the eigensolvers that is unclear. From linalg/lapack.jl (around line 4900):
# Hermitian eigensolvers
for (syev, syevr, sygvd, elty, relty) in
    ((:zheev_,:zheevr_,:zhegvd_,:Complex128,:Float64),
     (:cheev_,:cheevr_,:chegvd_,:Complex64,:Float32))
    @eval begin
        # SUBROUTINE ZHEEV( JOBZ, UPLO, N, A, LDA, W, WORK, LWORK, RWORK, INFO )
        # *   .. Scalar Arguments ..
        #     CHARACTER JOBZ, UPLO
⋮
So the macro code uses $syevr as a placeholder to refer to :zheevr_ and :cheevr_ in the two passes through the loop, defining the same syevr! for two different type signatures. These are LAPACK routines dedicated to Hermitian matrices, and they accept complex input directly, so the meat of the calculation, including the handling of complex numbers, happens inside LAPACK; no conversion to a real symmetric matrix takes place on the Julia side.
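In practice the original task does not require touching the LAPACK wrappers at all, since eigen on a Hermitian matrix already dispatches to them. A minimal sketch of the spectrum modification (the name eigmodif and the eigenvalue-clipping example for f are mine, not from the question):

using LinearAlgebra

# Rebuild U * Diagonal(f(s(H))) * U' from the eigendecomposition
function eigmodif(H::Hermitian, f)
    F = eigen(H)               # dispatches to zheevr_/cheevr_ for complex H
    F.vectors * Diagonal(f(F.values)) * F.vectors'   # Hermitian up to roundoff
end

A = rand(ComplexF64, 4, 4)
H = Hermitian(A + A')
Hmod = eigmodif(H, s -> max.(s, 0.0))   # example f: clip negative eigenvalues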
I need to build a matrix with extremely small entries.
So far, the fastest way I have found to define the kind of matrix I need is:
Define a vectorized function of coordinates:
func = function(m,n){...}
Combine every possible pair of coordinates using outer:
matrix = outer(1:100,1:100,FUN=func)
Having to deal with extremely small numbers, I work inside func with brob numbers (from the Brobdingnag package); its output is therefore of brob type:
typeof(func(0:100, 0:100))
[1] "S4"
If I directly plug two vectors 0:100 into my function func, it returns a vector of brobs, but if I try to use it with outer I get the error:
Error in outer(1:100, 1:100, FUN = func) : invalid first argument
I suppose this is because the Brobdingnag package can somehow deal with vectors but not with matrices. Is that right? Is there any way to make it work?
I have a five-dimensional rootfinding problem I'd like to solve from within a Sage notebook, but the functions I wish to solve depend on other parameters that shouldn't be varied during the rootfinding. Figuring out how to set up a call to, say, scipy.optimize.newton_krylov has me stumped. So let's say I have the following (with a, b, c, d, e the parameters I want to vary; F1, F2, F3, F4, F5 the five expressions I wish to make equal to the known values F1Val, F2Val, F3Val, F4Val, F5Val; and posVal another known parameter):
def func(a, b, c, d, e, F1Val, F2Val, F3Val, F4Val, F5Val, posVal):
    # subs() returns a new expression, so the results must be assigned
    f1 = F1.subs(x1=a, x2=b, x3=c, x4=d, x5=e, position=posVal)
    f2 = F2.subs(x1=a, x2=b, x3=c, x4=d, x5=e, position=posVal)
    f3 = F3.subs(x1=a, x2=b, x3=c, x4=d, x5=e, position=posVal)
    f4 = F4.subs(x1=a, x2=b, x3=c, x4=d, x5=e, position=posVal)
    f5 = F5.subs(x1=a, x2=b, x3=c, x4=d, x5=e, position=posVal)
    return (f1 - F1Val, f2 - F2Val, f3 - F3Val, f4 - F4Val, f5 - F5Val)
and now I want to pass this to a rootfinding function so that it yields func = (0,0,0,0,0). I want to pass an initial guess vector (a0, b0, c0, d0, e0) and a set of arguments (F1Val, F2Val, F3Val, F4Val, F5Val, posVal) for the evaluation, but I can't figure out how to do this. Is there a standard technique for this sort of thing? The multidimensional rootfinders in scipy seem to lack the args=() parameter that the 1D rootfinders offer.
Well, I'm still not sure how to actually employ the Newton-Raphson method here, but using fsolve works, for functions that accept a vector of variables and a vector of constant arguments. I'm reproducing my proof of concept here
from scipy.optimize import fsolve

def tstfunc(xIn, constIn):
    x = xIn[0]
    y = xIn[1]
    a = constIn[0]
    b = constIn[1]
    out = [x + 2*y + a]
    out.append(a*x*y + b)
    return out

# args is unpacked as extra positional arguments, so wrap the
# constant vector in a tuple to pass it through as one object
ans = fsolve(tstfunc, x0=[1, 1], args=([0.3, 2.1],))
print(ans)
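Note that for rootfinders that really do lack an args parameter, such as newton_krylov, a closure achieves the same effect (my own suggestion, not part of the original answer): newton_krylov(lambda x: tstfunc(x, [0.3, 2.1]), [1, 1]).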