As part of a larger algorithm, I need to produce the residuals of an array relative to a specified limit. In other words, given someArray, I need to produce an array whose elements encode the amount by which the corresponding element of someArray exceeds a limit value. My initial inclination was to use a distributed comparison to determine when a value has exceeded the threshold, as follows:
# Generate some test data.
residualLimit = 1
someArray = 2.1.*(rand(10,10,3).-0.5)
# Determine the residuals.
someArrayResiduals = (residualLimit-someArray)[(residualLimit-someArray.<0)]
The problem is that someArrayResiduals is a one-dimensional vector containing the residual values, rather than a mask of (residualLimit-someArray). If you check [(residualLimit-someArray.<0)] you'll find that it is behaving as expected; it's producing a BitArray. The question is, why doesn't Julia allow you to use this BitArray to mask someArray?
Casting the Bools in the BitArray to Ints using int() and distributing with .* produces the desired result, but is a bit inelegant... See the following:
# Generate some test data.
residualLimit = 1
someArray = 2.1.*(rand(10,10,3).-0.5)
# Determine the residuals.
someArrayResiduals = (residualLimit-someArray).*int(residualLimit-someArray.<0)
# This array should be (and is) limited at residualLimit. This is correct...
someArrayLimited = someArray + someArrayResiduals
Anyone know why a BitArray can't be used to mask an array? Or, any way that this entire process can be simplified?
Thanks, all!
Indexing with a logical array simply selects the elements at indices where the logical array is true. You can think of it as transforming the logical index array with find before doing the indexing expression. Note that this can be used in both array indexing and indexed assignment. These logical arrays are often themselves called masks, but indexing is more like a "selection" operation than a clamping operation.
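As a small illustration of that selection behaviour, here is a tiny example (hypothetical values, current Julia syntax):
x = [0.5, 1.5, 2.5]
mask = x .> 1        # BitVector: [false, true, true]
x[mask]              # returns only the selected elements, [1.5, 2.5]
x[mask] .= 1         # indexed assignment writes back through the same mask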
The suggestions in the comments are good, but you can also solve your problem using logical indexing with indexed assignment:
overLimitMask = someArray .> residualLimit
someArray[overLimitMask] = residualLimit
In this case, though, I think the most readable way to solve this problem is with min or clamp: min(someArray, residualLimit) or clamp(someArray, -residualLimit, residualLimit)
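For reference, here is a minimal sketch of the same ideas in current Julia (1.x) syntax, where elementwise operations and masked assignment require explicit broadcasting; this is an update for newer versions, not part of the original answer:
residualLimit = 1
someArray = 2.1 .* (rand(10, 10, 3) .- 0.5)
# Clamp every element to the interval [-residualLimit, residualLimit].
someArrayLimited = clamp.(someArray, -residualLimit, residualLimit)
# Or cap only the upper side, the broadcast analogue of min(someArray, residualLimit).
someArrayCapped = min.(someArray, residualLimit)
# In-place assignment through a logical mask also needs a dot.
overLimitMask = someArray .> residualLimit
someArray[overLimitMask] .= residualLimit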
I have been trying to implement some code in Julia JuMP. The idea of my code is that I have a for loop inside my while loop that runs S times. In each of these loops I solve a subproblem and get some variables as well as opt=1 if the subproblem was optimal or opt=0 if it was not optimal. Depending on the value of opt, I have two types of constraints, either optimality cuts (if opt=1) or feasibility cuts (if opt=0). So the intention with my code is that I only add all of the optimality cuts if there are no feasibility cuts for s=1:S (i.e. we get opt=1 in every iteration from 1:S).
What I am looking for is a better way to save the values of ubar, vbar and wbar. Currently I am saving them one at a time with the for-loop, which is quite expensive.
So the problem is that my values of ubar, vbar and wbar are SparseAxisArrays. I have tried to save them in other ways, such as building a 3-d SparseAxisArray, which I could not get to work since I couldn't figure out how to initialize it.
The below code works (with the correct code inserted inside my <>'s of course), but does not perform as well as I wish. So if there is some way to save the values of 2d sparse axis arrays more efficiently, I would love to know it! Thank you in advance!
ubar2 = zeros(nV, nV, S)
vbar2 = zeros(nV, nV, S)
wbar2 = zeros(nV, nV, S)
while <some condition>
    opts = 0
    for s = 1:S
        <solve a subproblem, get new ubar, vbar, wbar and opt=1 if optimal or 0 if not>
        opts += opt
        if opt == 1
            # Add opt cut constraints
            for i = 1:nV
                for k = 1:nV
                    if i != k
                        ubar2[i, k, s] = ubar[i, k]
                    end
                end
                for j = i:nV
                    if links[i, j] == 1
                        vbar2[i, j, s] = vbar[i, j]
                        wbar2[i, j, s] = wbar[i, j]
                    end
                end
            end
        else
            # Add feas cut constraints
            # @constraint(mas, <constraint from ubar, vbar, wbar> <= 0)
            break
        end
    end
    if opts == S
        for s = 1:S
            # @constraint(mas, <constraint from ubar2, vbar2, wbar2> <= <some variable>)
        end
    end
end
A SparseAxisArray is simply a thin wrapper on top of a Dict.
It was defined so that when the user creates a container in a JuMP macro, whether they get an Array, a DenseAxisArray or a SparseAxisArray, the containers behave as similarly as possible, and for most operations the user does not need to care which one they obtained.
For this reason we could not just create a Dict, as it behaves differently from an array. For instance, with a Dict you cannot call getindex with multiple indices, as in x[2, 2].
Here you can use either a Dict or a SparseAxisArray, as you prefer.
Both of them have O(1) complexity for setting and getting elements and use sparse storage, which seems adequate for what you need.
If you choose SparseAxisArray, you can initialize it with
ubar2 = JuMP.Containers.SparseAxisArray(Dict{Tuple{Int,Int,Int},Float64}())
and set it with
ubar2[i,k,s]=ubar[i,k]
If you choose Dict, you can initialize it with
ubar2 = Dict{Tuple{Int,Int,Int},Float64}()
and set it with
ubar2[(i,k,s)]=ubar[i,k]
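For concreteness, here is a minimal sketch (not from the original answer) of how the Dict variant could replace the dense ubar2/vbar2/wbar2 arrays inside the loop from the question; nV, S, links, the loop index s, and the per-iteration ubar, vbar, wbar are assumed to exist exactly as in the question:
# Sparse storage keyed by (i, k, s); insertion and lookup are O(1).
ubar2 = Dict{Tuple{Int,Int,Int},Float64}()
vbar2 = Dict{Tuple{Int,Int,Int},Float64}()
wbar2 = Dict{Tuple{Int,Int,Int},Float64}()
# Inside the `for s = 1:S` loop, after an optimal subproblem solve:
for i = 1:nV
    for k = 1:nV
        if i != k
            ubar2[(i, k, s)] = ubar[i, k]
        end
    end
    for j = i:nV
        if links[i, j] == 1
            vbar2[(i, j, s)] = vbar[i, j]
            wbar2[(i, j, s)] = wbar[i, j]
        end
    end
end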
I am trying to read a .tif file in Julia as a floating-point array. With the FileIO and ImageMagick packages I am able to do this, but the array that I get is of type Array{ColorTypes.Gray{FixedPointNumbers.Normed{UInt8,8}},2}.
I can convert this FixedPoint array to a Float32 array by multiplying it by 255 (because of the UInt8 storage), but I am looking for a function that does this for any type of FixedPointNumber (e.g. reinterpret() or convert()).
using FileIO
# Load the tif
obj = load("test.tif");
typeof(obj)
# Convert to Float32-Array
objNew = real.(obj) .* 255
typeof(objNew)
The output is
julia> using FileIO
julia> obj = load("test.tif");
julia> typeof(obj)
Array{ColorTypes.Gray{FixedPointNumbers.Normed{UInt8,8}},2}
julia> objNew = real.(obj) .* 255;
julia> typeof(objNew)
Array{Float32,2}
I have been looking in the docs for quite a while and have not found a function with which to convert a given FixedPoint array to a floating-point array without multiplying it by the maximum value of the integer type.
Thanks for any help.
edit:
I made a small gist to see if the solution by Michael works, and it does. Thanks!
Note: I don't know why, but the real.(obj) .* 255 code does not work (see the gist).
Why not just Float32.()?
using ColorTypes
a = Gray.(convert.(Normed{UInt8,8}, rand(5,6)));
typeof(a)
#Array{ColorTypes.Gray{FixedPointNumbers.Normed{UInt8,8}},2}
Float32.(a)
The short answer is indeed the one given by Michael: just use Float32.(a) (for grayscale). Another alternative is channelview(a), which generally performs channel separation and thus also strips the color information from the array. In the latter case you won't get a Float32 array, because your image is stored with 8 bits per pixel; instead you'll get an N0f8 (= FixedPointNumbers.Normed{UInt8,8}) array. You can read about those numbers in the FixedPointNumbers documentation.
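As a quick demonstration of both routes, here is a small sketch; it assumes the ColorTypes, FixedPointNumbers and ImageCore packages are installed, and uses random data in place of the loaded tif:
using ColorTypes, FixedPointNumbers, ImageCore
a = Gray.(N0f8.(rand(5, 6)))   # Array{Gray{N0f8},2}, same element type as the loaded tif
b = Float32.(a)                # Array{Float32,2} with values on the 0-to-1 scale
c = channelview(a)             # strips the Gray wrapper; element type is still N0f8
Float32.(c)                    # convert the N0f8 view if a plain Float32 array is needed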
Your instinct to multiply by 255 is natural, given how other image-processing frameworks work, but Julia has made some effort to be consistent about "meaning" in ways that are worth taking a moment to think about. For example, in another programming language just changing the numerical precision of an array:
img = uint8(255*rand(10, 10, 3)); % an 8-bit per color channel image
figure; image(img)
imgd = double(img); % convert to double-precision, but don't change the values
figure; image(imgd)
produces a surprising result: the image displayed from imgd renders as all white.
That "all white" image represents saturation. In this other language, "5" means two completely different things depending on whether it's stored in memory as a UInt8 vs a Float64. I think it's fair to say that under any normal circumstances, a user of a numerical library would call this a bug, and a very serious one at that, yet somehow many of us have grown to accept this in the context of image processing.
These new types arise because in Julia we've gone to the effort to implement new numerical types (FixedPointNumbers) that act like fractional values (e.g., between 0 and 1) but are stored internally with the same bit pattern as the "corresponding" UInt8 (the one you get by multiplying by 255). This allows us to work with 8-bit data and yet allow values to always be interpreted on a consistent scale (0.0=black, 1.0=white).
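To make the "same bits, consistent meaning" point concrete, here is a small sketch using FixedPointNumbers directly (assumed to be installed):
using FixedPointNumbers
x = reinterpret(N0f8, 0x80)   # the raw byte 0x80 interpreted on the 0-to-1 scale
Float32(x)                    # ≈ 0.5019608f0, i.e. 128/255
reinterpret(UInt8, x)         # 0x80 again; the underlying storage never changed
one(N0f8)                     # 1.0N0f8, stored as the byte 0xff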
I would like to check if a variable is a scalar in Julia, such as an Integer, String, or Number, but not an AbstractArray, Tuple, type, struct, etc. Is there a simple method to do this (i.e. isscalar(x))?
The notion of what is, or is not, a scalar is under-defined without more context.
Mathematically, a scalar is defined as follows (Wikipedia):
A scalar is an element of a field which is used to define a vector space.
That is to say, you need to define a vector space, based on a field, before you can determine whether something is, or is not, a scalar (relative to that vector space).
For the right vector space, tuples could be a scalar.
Of course, we are not looking for a mathematically rigorous definition, just a pragmatic one.
Base it off what Broadcasting considers to be scalar
I suggest that the only meaningful way in which a scalar can be defined in Julia is in terms of the behavior of broadcast.
As of Julia 1:
using Base.Broadcast
isscalar(x::T) where T = isscalar(T)
isscalar(::Type{T}) where T = BroadcastStyle(T) isa Broadcast.DefaultArrayStyle{0}
See the docs for Broadcast.
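As a quick sanity check of the Julia 1 definition above (expected results, assuming the default broadcast styles defined in Base):
isscalar(1)           # true  (numbers broadcast as zero-dimensional)
isscalar(1.5 + 2im)   # true
isscalar([1, 2, 3])   # false (DefaultArrayStyle{1})
isscalar((1, 2))      # false (tuples have their own broadcast style)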
In Julia 0.7, Scalar is the default, so it is basically anything that doesn't have specific broadcasting behavior; this rules out things like arrays and tuples:
using Base.Broadcast
isscalar(x::T) where T = isscalar(T)
isscalar(::Type{T}) where T = BroadcastStyle(T) isa Broadcast.Scalar
In Julia 0.6 this is a bit messier, but similar:
isscalar(x::T) where T = isscalar(T)
isscalar(::Type{T}) where T = Base.Broadcast._containertype(T)===Any
The advantage of using Broadcast's methods to determine whether something is scalar, rather than rolling your own, is that anyone making a new type that is going to act as a scalar (or rather as a nonscalar, since scalar is the default) must already make sure the type works correctly with those methods.
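For instance, a hypothetical wrapper type (the name MyWrapper is made up for illustration) can opt out of scalarness just by declaring a broadcast style, and the Julia 1 isscalar above picks that up automatically:
struct MyWrapper           # hypothetical container-like type
    data::Vector{Int}
end
# Declaring a one-dimensional broadcast style is how a new type says "I am not a scalar".
Base.Broadcast.BroadcastStyle(::Type{MyWrapper}) = Base.Broadcast.DefaultArrayStyle{1}()
isscalar(MyWrapper)        # false, without isscalar knowing anything about MyWrapper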
Structs are not not scalar
That is to say, sometimes structs are scalar and sometimes they are not; it depends on the struct.
Note, however, that these methods do not consider structs to be nonscalar, and I think you are mistaken in wanting them to be.
Julia structs are not (necessarily or usually) collection types. Consider that BigInt, BigFloat, Complex128, etc. are all defined using structs.
I was tempted to say that having a start method makes a type nonscalar, but that would be incorrect, as start(::Number) is defined. (This has been debated a few times.)
For completeness, I am copying Tasos Papastylianou's answer from the comments to here. If all you want to do is distinguish scalars from arrays you can use:
isa(x, Number)
This will output true if x is a Number (like a float or an int), and output false if x is an Array (vector, matrix, etc.)
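A few hypothetical calls showing what that simple test does and does not cover:
isa(1, Number)           # true
isa(3.0 + 2im, Number)   # true
isa([1, 2, 3], Number)   # false
isa("hello", Number)     # false; strings are not covered by this simple test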
I found myself needing to capture the notion of whether something was scalar or not recently in MultiResolutionIterators.jl. I found the broadcasting-based rules from the other answer did not meet my needs; in particular, I wanted to consider strings as nonscalar. So I defined a trait, based on method_exists(start, (T,)), with some exceptions as mentioned, e.g. for Number.
abstract type Scalarness end
struct Scalar <: Scalarness end
struct NotScalar <: Scalarness end
isscalar(::Type{Any}) = NotScalar() # if we don't know the type we can't really know if scalar or not
isscalar(::Type{<:AbstractString}) = NotScalar() # We consider strings to be nonscalar
isscalar(::Type{<:Number}) = Scalar() # We consider Numbers to be scalar
isscalar(::Type{Char}) = Scalar() # We consider characters to be scalar
isscalar(::Type{T}) where T = method_exists(start, (T,)) ? NotScalar() : Scalar()
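As a hypothetical usage example (not part of the original answer), one can dispatch on the Scalarness trait, here to count the scalar leaves of a nested structure; this is written against the same Julia 0.6-era definitions above:
count_scalars(x) = count_scalars(isscalar(typeof(x)), x)
count_scalars(::Scalar, x) = 1
count_scalars(::NotScalar, xs) = sum(count_scalars(x) for x in xs)
count_scalars(1)                 # 1
count_scalars([1, 2, [3, 4]])    # 4 (the nested vector is recursed into)
count_scalars("abc")             # 3 (strings are nonscalar, each Char is scalar)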
Something similar is also done by AbstractTrees.jl
isscalar(x) == applicable(start, x) && !isa(x, Integer) && !isa(x, Char) && !isa(x, Task)
Good Morning,
I have an array with about 3000 double values, and I need to find all local minima and maxima. For this I'm interested in the first and second derivatives. What's the best way to achieve this with Apache Commons Math? My trouble is that I'm starting directly from the array, not from a function like sin(x).
Thanks
With just an array you won't be able to find a min/max.
If the array was calculated from a known function, then you could differentiate it numerically (just calculate the value at X and at X + epsilon and divide the difference by epsilon, assuming there's a single parameter that you're differentiating with respect to).
Alternatively, is the array actually the list of coefficients of a big polynomial? If so, then the same approach might work.
I'm new to OCaml, and I'd like to implement Gaussian Elimination as an exercise. I can easily do it with a stateful algorithm, meaning keep a matrix in memory and recursively operating on it by passing around a reference to it.
This statefulness, however, smacks of imperative programming. I know there are capabilities in OCaml to do this, but I'd like to ask if there is some clever functional way I haven't thought of first.
OCaml arrays are mutable, and it's hard to avoid treating them just like arrays in an imperative language.
Haskell has immutable arrays, but from my (limited) experience with Haskell, you end up switching to monadic, mutable arrays in most cases. Immutable arrays are probably amazing for certain specific purposes. I've always imagined you could write a beautiful implementation of dynamic programming in Haskell, where the dependencies among array entries are defined entirely by the expressions in them. The key is that you really only need to specify the contents of each array entry one time. I don't think Gaussian elimination follows this pattern, and so it seems it might not be a good fit for immutable arrays. It would be interesting to see how it works out, however.
You can use a Map to emulate a matrix. The key would be a pair of integers referencing the row and column. You'll want to use your own get x y function to ensure x < n and y < n though, instead of accessing the Map directly. (edit) You can use the compare function in Pervasives directly.
module OrderedPairs = struct
  type t = int * int
  let compare = Pervasives.compare
end

module Pairs = Map.Make (OrderedPairs)

let get_ n set x y =
  assert (x < n && y < n);
  Pairs.find (x, y) set

let set_ n set x y v =
  assert (x < n && y < n);
  Pairs.add (x, y) v set
Actually, having a general set of functions (get x y and set x y at a minimum), without specifying the implementation, would be an even better option. The functions can then be passed to the elimination routine, or be implemented in a module through a functor (a better solution, but having a set of functions that just does what you need would be a good first step since you're new to OCaml). In this way you could use a Map, an Array, a Hashtbl, or a set of functions that access a file on the hard drive to implement the matrix if you wanted. This is the really important aspect of functional programming: you trust the interface rather than exploiting side effects, and you don't worry about the underlying implementation, since it's presumed to be pure.
The answers so far are using/emulating mutable data-types, but what does a functional approach look like?
To see, let's decompose the problem into some functional components:
Gaussian elimination involves a sequence of row operations, so it is useful first to define a function taking 2 rows and scaling factors, and returning the resultant row operation result.
The row operations we want should eliminate a variable (column) from a particular row, so let's define a function which takes a pair of rows and a column index and uses the previously defined row operation to return the modified row with that column entry zero.
Then we define two functions, one to convert a matrix into triangular form, and another to back-substitute a triangular matrix to the diagonal form (using the previously defined functions) by eliminating each column in turn. We could iterate or recurse over the columns, and the matrix could be defined as a list, vector or array of lists, vectors or arrays. The input is not changed, but a modified matrix is returned, so we can finally do:
let out_matrix = to_diagonal (to_triangular in_matrix)
What makes it functional is not whether the data types (array or list) are mutable, but how they are used. This approach may not be particularly 'clever' or the most efficient way to do Gaussian elimination in OCaml, but using pure functions lets you express the algorithm cleanly.