How to use assocs with a HashMap in Haskell

Currently I'm trying to use Map's assocs function, but I'm unable to figure out how to get it to work for a HashMap. For a regular Map the following works just fine.
import qualified Data.Map as M
test = M.fromList [("a", 1), ("b", 2)]
M.assocs test
However, when I try the same thing with a HashMap it doesn't work. I tried several variations on the import; all fail with different errors. Oddly, however, most other functions that work on maps work just fine with the import below; for example, I have no trouble using M.lookup.
import qualified Data.HashMap.Lazy as M
test = M.fromList [("a", 1), ("b", 2)]
M.assocs test
In case it is useful, the above code gives the following error:
<interactive>:1:1: error:
Not in scope: ‘M.assocs’
No module named ‘M’ is imported.

Data.HashMap.Lazy, from unordered-containers, does not export an assocs function.
You might be thinking of Data.HashMap from the hashmap package.

I figured out the answer. In Data.HashMap.Lazy, the function toList does the same job as assocs, so the following code works.
import qualified Data.HashMap.Lazy as M
test = M.fromList [("a", 1), ("b", 2)]
M.toList test


Finding a Module's path, using the Module object

What is the sane way to go from a Module object to a path to the file in which it was declared?
To be precise, I am looking for the file where the keyword module occurs.
The indirect method is to find the location of the automatically defined eval method in each module.
moduleloc(mm::Module) = first(functionloc(mm.eval, (Symbol,)))
for example
moduleloc(mm::Module) = first(functionloc(mm.eval, (Symbol,)))
using DataStructures
moduleloc(DataStructures)
Outputs:
/home/oxinabox/.julia/v0.6/DataStructures/src/DataStructures.jl
This indirect method works, but it feels like a bit of a kludge.
Have I missed some inbuilt function to do this?
I will remind answerers that Modules are not the same thing as packages.
Consider the existence of submodules, or even modules that are loaded via include-ing some absolute path that is outside the package directory or the load path.
Modules simply do not store the file location where they were defined. You can see that for yourself in their definition in C. Your only hope is to look through the bindings they hold.
Methods, on the other hand, do store their file location. And eval is the one function that is defined in every single module (although not baremodules). Slightly more correct might be:
moduleloc(mm::Module) = first(functionloc(mm.eval, (Any,)))
as that more precisely mirrors the auto-defined eval method.
If you aren't looking for a programmatic way of doing it, you can use the methods function.
using DataFrames
locations = methods(DataFrames.readtable).ms
It lists all methods, but it's hardly difficult to find the right one unless you have an enormous number of methods that differ only in small ways.
There is now pathof:
using DataStructures
pathof(DataStructures)
"/home/ederag/.julia/packages/DataStructures/59MD0/src/DataStructures.jl"
See also: pkgdir.
pkgdir(DataStructures)
"/home/ederag/.julia/packages/DataStructures/59MD0"
Tested with julia-1.7.3
require obviously needs to perform that operation. Looking into loading.jl, I found that finding the module path has changed a bit recently: in v0.6.0, there is a function
load_hook(prefix::String, name::String, ::Void)
which you can call "manually":
julia> Base.load_hook(Pkg.dir(), "DataFrames", nothing)
"/home/philipp/.julia/v0.6/DataFrames/src/DataFrames.jl"
However, this has changed for the better in the current master; there's now a function find_package, which we can copy:
macro return_if_file(path)
    quote
        path = $(esc(path))
        isfile(path) && return path
    end
end

function find_package(name::String)
    endswith(name, ".jl") && (name = chop(name, 0, 3))
    for dir in [Pkg.dir(); LOAD_PATH]
        dir = abspath(dir)
        @return_if_file joinpath(dir, "$name.jl")
        @return_if_file joinpath(dir, "$name.jl", "src", "$name.jl")
        @return_if_file joinpath(dir, name, "src", "$name.jl")
    end
    return nothing
end
and add a little helper:
find_package(m::Module) = find_package(string(module_name(m)))
Basically, this takes Pkg.dir() and looks in the "usual locations".
Additionally, chop in v0.6.0 doesn't take these additional arguments, which we can fix by adding
chop(s::AbstractString, m, n) = SubString(s, m, endof(s)-n)
Also, if you're not on Unix, you might want to care about the definitions of isfile_casesensitive above the linked code.
And if you're not so concerned about corner cases, maybe this is enough or can serve as a basis:
function modulepath(m::Module)
    name = string(module_name(m))
    Pkg.dir(name, "src", "$name.jl")
end
julia> Pkg.dir("DataStructures")
"/home/liso/.julia/v0.7/DataStructures"
Edit: I now realized that you want to use a Module object!
julia> m = DataStructures
julia> Pkg.dir(repr(m))
"/home/liso/.julia/v0.7/DataStructures"
Edit 2: I am not sure whether you are trying to find the path to the module or to an object defined in the module (I hope that parsing the path from the next result is easy):
julia> repr(which(DataStructures.eval, (String,)))
"eval(x) in DataStructures at /home/liso/.julia/v0.7/DataStructures/src/DataStructures.jl:3"

Correcting flowtype module.name_mapper path

I am unable to get the proper regex working to map
import immutable from 'npm:immutable' to import immutable from 'immutable'
Anyone have a good suggestion?
This should do the trick
# Map imports named npm:<package> to <package>
module.name_mapper='^npm:\(.*\)$' -> '\1'
The \1 represents everything caught in the first capture group \(.*\), so in your case that would be immutable.
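If you want to sanity-check what the capture group does outside of Flow, here is a small illustration using Python's re module (this demo is an addition; note that Flow's OCaml-style regex syntax escapes the group parentheses, while Python's does not):
import re

# '^npm:\(.*\)$' in the .flowconfig corresponds to '^npm:(.*)$' here:
# everything after the "npm:" prefix is captured and substituted back as \1.
pattern = r'^npm:(.*)$'

print(re.sub(pattern, r'\1', 'npm:immutable'))  # -> immutable
print(re.sub(pattern, r'\1', 'immutable'))      # no "npm:" prefix, left unchanged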

Python function object in map function going weird (Spark)

I have a dictionary that maps a key to a function object. Then, using Spark 1.4.1 (Spark may not even be relevant for this question), I try to map each object in the RDD using a function object retrieved from the dictionary (which acts as a look-up table), e.g. a small snippet of my code:
fnCall = groupFnList[0].fn
pagesRDD = pagesRDD.map(lambda x: [x, fnCall(x[0])]).map(shapeToTuple)
Now, it has fetched the function object from a namedtuple, which I temporarily 'store' (i.e. point to the fn obj) in fnCall. Then, using the map operations, I want the x[0] element of each tuple to be processed using that function.
All seems fine in that there indeed IS an fn object, but it behaves in a weird way.
Each time I call an action method on the RDD, even without having used a fn obj in between, the RDD values have changed! To visualize this I have created dummy functions for the fn objects that just output a random integer. After calling the fn obj on the RDD, I can inspect it with .take() or .first() and get the following:
pagesRDD.first()
>>> [(u'myPDF1.pdf', u'34', u'930', u'30')]
pagesRDD.first()
>>> [(u'myPDF1.pdf', u'23', u'472', u'11')]
pagesRDD.first()
>>> [(u'myPDF1.pdf', u'4', u'69', u'25')]
So it seems to me that the RDD's elements have the functions bound to them in some way, and each time I do an action operation (like .first(), very simple) it 'updates' the RDD's contents.
I don't want this to happen! I just want the function to process the RDD ONLY when I call it with a map operation. How can I 'unbind' this function after the map operation?
Any ideas?
Thanks!
UPDATE:
So apparently rewriting my code to call it like pagesRDD.map(fnCall) should do the trick, but why should this even matter? If I call
rdd = rdd.map(lambda x: (x,1))
rdd.first()
>>> # some output
rdd.first()
>>> # same output as before!
So in this case, using a lambda function, it would not get bound to the RDD and would not be called each time I do a .take()-like action. So why is that the case when I use an fn object INSIDE the lambda? Logically it just does not make sense to me. Any explanation for this?
If you redefine your functions so that their parameter is an iterable, your code should look like this:
pagesRDD = pagesRDD.map(fnCall).map(shapeToTuple)
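The behaviour described in the question is most likely ordinary Spark lazy evaluation rather than the function being 'bound' to the RDD: transformations are only recorded, and every action (.first(), .take(), .collect(), ...) recomputes the lineage, so a non-deterministic function runs again on each action unless the RDD is cached. A minimal sketch of that idea (the local SparkContext sc, the random-valued dummy_fn, and the cache() call are illustrative assumptions, not part of the original code):
import random
from pyspark import SparkContext

sc = SparkContext("local", "lazy-demo")  # assumed local context for the sketch

def dummy_fn(x):
    # Stand-in for the question's fn object: deliberately non-deterministic.
    return random.randint(0, 1000)

rdd = sc.parallelize([u'myPDF1.pdf', u'myPDF2.pdf'])
mapped = rdd.map(lambda x: (x, dummy_fn(x)))

print(mapped.first())  # the lineage is recomputed, so dummy_fn runs again ...
print(mapped.first())  # ... and the random value differs between actions

cached = mapped.cache()
cached.count()         # materialise the RDD once
print(cached.first())  # repeated actions now return the same values
print(cached.first())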

parallel list comprehension using Pool map

I have a list comprehension:
thingie=[f(a,x,c) for x in some_list]
which I am parallelising as follows:
from multiprocessing import Pool
pool=Pool(processes=4)
thingie=pool.map(lambda x: f(a,x,c), some_list)
but I get the following error:
_pickle.PicklingError: Can't pickle <function <lambda> at 0x7f60b3b0e9d8>:
attribute lookup <lambda> on __main__ failed
I have tried to install the pathos package which apparently addresses this issue, but when I try to import it I get the error:
ImportError: No module named 'pathos'
OK, so this answer is just for the record; I figured it out with the author of the question during a comment conversation.
multiprocessing needs to transport every object between processes, so it uses pickle to serialize it in one process and deserialize it in another. That all works well, but pickle cannot serialize a lambda: pickle stores a function by reference, essentially as its module plus qualified name, and a lambda has no importable name for the receiving process to look up.
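You can see this with nothing but the standard pickle module (this little demo is an addition; named and anonymous are throwaway examples):
import pickle

def named(x):
    return x + 1

anonymous = lambda x: x + 1

print(len(pickle.dumps(named)))  # fine: pickled as a reference to __main__.named
try:
    pickle.dumps(anonymous)
except pickle.PicklingError as exc:
    print(exc)  # Can't pickle <function <lambda> ...>: attribute lookup failed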
It won't be a problem if you use map() with a one-argument function - you can pass that function instead of a lambda. If you have more arguments, like in your example, you need to define a wrapper with the def keyword:
from multiprocessing import Pool

def f(x, y, z):
    print(x, y, z)

def f_wrapper(y):
    return f(1, y, "a")

pool = Pool(processes=4)
result = pool.map(f_wrapper, [7, 9, 11])
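One portability note the answer does not mention (this is an addition): on Windows, and on macOS since Python 3.8, multiprocessing uses the 'spawn' start method, which re-imports the main module in every worker, so the Pool should only be created under an import guard:
from multiprocessing import Pool

def f(x, y, z):
    print(x, y, z)

def f_wrapper(y):
    return f(1, y, "a")

if __name__ == "__main__":           # required under the 'spawn' start method
    with Pool(processes=4) as pool:  # Pool is also a context manager in Python 3
        print(pool.map(f_wrapper, [7, 9, 11]))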
Just before I close this, I found another way to do this in Python 3, using functools.
Say I have a function f with three arguments, f(a, x, c), one of which I want to map over, say x. I can use the following code to do basically what @FilipMalczak suggests (functools.partial objects can be pickled as long as the wrapped function is a module-level function, which is why this works where the lambda did not):
import functools
from multiprocessing import Pool

def f(a, x, c):   # placeholder for the question's three-argument function
    return (a, x, c)

f1 = functools.partial(f, 10)     # fix a positionally
f2 = functools.partial(f1, c=10)  # fix c by keyword; only x stays free
pool = Pool(processes=4)
final_answer = pool.map(f2, some_list)  # each element of some_list becomes x

Do I get 3-address code in Frama-C?

I just started developing a Frama-C plugin that does some kind of alias analysis. I'm using the Dataflow.Backwards analysis, and now I have to go through the different assignment statements and collect some information about the lvalues.
Does Frama-C provide me with 3-address code? Do I have any guarantees about the shape of the lvalues (or any memory accesses)? I mean, something like in Soot or WALA, where there is at most one field access per statement, so that for a->b->c there would be a temporary variable, as in tmp=a->b; tmp->c;? I checked the manuals, but I couldn't find anything related to this.
No, there is no such normalization in Frama-C. If you really need it, you can first use a visitor to normalize the code so that it suits the requirements of your plug-in. It would go like this:
class normalize prj: Visitor.frama_c_visitor =
  object
    inherit Visitor.frama_c_copy prj
    method vinstr i =
      match i with
      | Set (lv,e) -> ...
      ....
  end

let analyze () = ...

let run () =
  let my_prj = File.create_project_from_visitor "my_project" (fun prj -> new normalize prj) in
  Project.on my_prj analyze ()
The following module from Cil probably does what you want:
http://www.cs.berkeley.edu/~necula/cil/ext.html#toc26. Be aware that the type of the resulting AST is the standard Cil one: you won't get any help from the OCaml compiler as to which constructs can be present in the simplified AST and which ones cannot.
Note also that this module has not been ported to Frama-C so far. You will need some minor adaptation to make it work within Frama-C.
