Clojure: idiomatic way to update multiple values of a map (dictionary)

This is probably straightforward, but I just can't figure it out.
I have a data structure that is a nested map, like this:
(def m {:1 {:1 2 :2 5 :3 10} :2 {:1 2 :2 50 :3 25} :3 {:1 42 :2 23 :3 4}})
I need to set every m[i][i]=0. This is simple in non-functional languages, but I can't make it work in Clojure. What is the idiomatic way to do it, given that I have a vector of every possible key? (let's call it v)
Doing (map #(def m (assoc-in m [% %] 0)) v) will work, but using def inside a function passed to map doesn't seem right.
Making m an atom and using swap! seems better, but not by much. It also seems to be REALLY slow.
(def am (atom m))
(map #(swap! am assoc-in [% %] 0) v)
What is the best/right way to do that?
UPDATE
Some great answers over here. I've posted a follow-up question, Clojure: iterate over map of sets, that is closely related to this one, but not quite the same.

You're right that it's bad form to use def inside a function. It's also bad form to use functions with side effects (such as swap!) inside map. Furthermore, map is lazy, so your map/swap! attempt won't actually do anything unless it is forced with, e.g., dorun.
Now that we've covered what not to do, let's take a look at how to do it ;-).
There are several approaches you could take. Perhaps the easiest for someone coming from an imperative paradigm to start with is loop:
(defn update-m [m v]
  (loop [v' v
         m' m]
    (if (empty? v')
      ;; then (we're done => return result)
      m'
      ;; else (more to go => process next element)
      (let [i (first v')]
        (recur (rest v')                   ;; iterate over v
               (assoc-in m' [i i] 0)))))) ;; accumulate result in m'
However, loop is a relatively low-level construct within the functional paradigm. Here we can note a pattern: we are looping over the elements of v and accumulating changes in m'. This pattern is captured by the reduce function:
(defn update-m [m v]
  (reduce (fn [m' i]
            (assoc-in m' [i i] 0)) ;; accumulate changes
          m                        ;; initial-value
          v))                      ;; collection to loop over
This form is quite a bit shorter because it doesn't need the boilerplate code the loop form requires. The reduce form might not be as easy to read at first, but once you get used to functional code, it will become more natural.
Now that we have update-m, we can use it to transform maps in the program at large. For example, we can use it to swap! an atom. Following your example above:
(swap! am update-m v)
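You can also apply it directly to the immutable m from the question. Assuming v is the vector of keys [:1 :2 :3] (the question doesn't spell it out), the result would be:
(update-m m [:1 :2 :3])
;; => {:1 {:1 0, :2 5, :3 10}, :2 {:1 2, :2 0, :3 25}, :3 {:1 42, :2 23, :3 0}}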

Using def inside a function would certainly not be recommended. The OO way to look at the problem is to look inside the data structure and modify the value. The functional way to look at the problem would be to build up a new data structure that represents the old one with values changed. So we could do something like this
(into {} (map (fn [[k v]] [k (assoc v k 0)]) m))
What we're doing here is mapping over m in much the same way you did in your first example. But instead of modifying m, we return a [key value] tuple for each entry; this list of tuples can then be poured back into a map with into.

The other answers are fine, but for completeness here's a slightly shorter version using a for comprehension. I find it more readable but it's a matter of taste:
(into {} (for [[k v] m] {k (assoc v k 0)}))

Related

How to understand clojure's lazy-seq

I'm trying to understand clojure's lazy-seq operator, and the concept of lazy evaluation in general. I know the basic idea behind the concept: Evaluation of an expression is delayed until the value is needed.
In general, this is achievable in two ways:
at compile time using macros or special forms;
at runtime using lambda functions
With lazy evaluation techniques, it is possible to construct infinite data structures that are evaluated as consumed. These infinite sequences utilize lambdas, closures and recursion. In Clojure, these infinite data structures are generated using the lazy-seq and cons forms.
I want to understand how lazy-seq does its magic. I know it is actually a macro. Consider the following example.
(defn rep [n]
  (lazy-seq (cons n (rep n))))
Here, the rep function returns a lazily-evaluated sequence of type LazySeq, which now can be transformed and consumed (thus evaluated) using the sequence API. This API provides functions take, map, filter and reduce.
In the expanded form, we can see how lambda is utilized to store the recipe for the cell without evaluating it immediately.
(defn rep [n]
  (new clojure.lang.LazySeq (fn* [] (cons n (rep n)))))
But how does the sequence API actually work with LazySeq?
What actually happens in the following expression?
(reduce + (take 3 (map inc (rep 5))))
How is the intermediate operation map applied to the sequence,
how does take limit the sequence and
how does terminal operation reduce evaluate the sequence?
Also, how do these functions work with either a Vector or a LazySeq?
Also, is it possible to generate nested infinite data structures: lists containing lists, containing lists, containing lists... going infinitely wide and deep, evaluated as consumed with the sequence API?
And last question, is there any practical difference between this
(defn rep [n]
  (lazy-seq (cons n (rep n))))
and this?
(defn rep [n]
  (cons n (lazy-seq (rep n))))
That's a lot of questions!
How does the seq API actually work with LazySeq?
If you take a look at LazySeq's class source code you will notice that it implements ISeq interface providing methods like first, more and next.
Functions like map, take and filter are built using lazy-seq (they produce lazy sequences) and first and rest (which in turn uses more), and that's how they can work with a lazy seq as their input collection - by using the first and more implementations of the LazySeq class.
What actually happens in the following expression?
(reduce + (take 3 (map inc (rep 5))))
The key is to look at how LazySeq.first works. It will invoke the wrapped function to obtain and memoize the result. In your case it will be the following code:
(cons n (rep n))
Thus it will be a cons cell with n as its value and another LazySeq instance (result of a recursive call to rep) as its rest part. It will become the realised value of this LazySeq object and first will return the value of the cached cons cell.
When you call more on it, it will in the same way ensure that the value of the particular LazySeq object is realised (or reuse the memoized value) and call more on it (in this case more on the cons cell containing another LazySeq object).
Once you obtain another instance of LazySeq object with more the story repeats when you call first on it.
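You can watch this memoization happen at the REPL with realized?, which reports whether a lazy seq has been realised yet (a small check using the rep from the question):
(def s (rep 5))
(realized? s) ;; => false - nothing evaluated yet
(first s)     ;; => 5     - forces the wrapped function
(realized? s) ;; => true  - the cons cell is now cached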
map and take will create another lazy-seq that will call first and more on the collection passed as their argument (just another lazy seq), so it will be a similar story. The only difference is how the values passed to cons are generated (e.g. in map, calling f on the value obtained by first from the LazySeq being mapped over, instead of a raw value like n in your rep function).
With reduce it's a bit simpler as it will use loop with first and more to iterate over the input lazy seq and apply the reducing function to produce the final result.
As for what the actual implementations of map and take look like, I encourage you to check their source code - it's quite easy to follow.
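To make the idea concrete, here is a simplified sketch of how a map-like function can be built out of lazy-seq, first and rest (the real clojure.core/map is chunked and handles multiple collections, so this is only an illustration):
(defn my-map [f coll]
  (lazy-seq
    (when-let [s (seq coll)]
      (cons (f (first s)) (my-map f (rest s))))))
(take 3 (my-map inc (rep 5)))
;; => (6 6 6)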
How can the seq API work with different collection types (e.g. lazy seq and persistent vector)?
As mentioned above, map, take and other functions work in terms of first and rest (reminder - rest is implemented on top of more). Thus we need to explain how first and rest/more can work with different collection types: they check if the collection implements ISeq (in which case it provides those functions directly), or they try to create a seq view of the collection and call its implementations of first and more.
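For example, a persistent vector is not itself an ISeq, but seq can produce a seq view of it, so first and rest work on it just as they do on a lazy seq:
(seq [1 2 3])   ;; => (1 2 3)
(first [1 2 3]) ;; => 1
(rest [1 2 3])  ;; => (2 3)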
Is it possible to generate nested infinite data structures?
It's definitely possible, but I am not sure what exact data shape you would like to get. Do you mean getting a lazy seq which generates another sequence as its value (instead of a single value like n in your rep), like this:
(defn nested-cons [n]
  (lazy-seq (cons (repeat n n) (nested-cons (inc n)))))
(take 3 (nested-cons 1))
;; => ((1) (2 2) (3 3 3))
or one that would rather return a flat sequence like (1 2 2 3 3 3)?
For such cases you might use concat instead of cons; concat creates a lazy sequence out of two or more sequences:
(defn nested-concat [n]
  (lazy-seq (concat (repeat n n) (nested-concat (inc n)))))
(take 6 (nested-concat 1))
;; => (1 2 2 3 3 3)
Is there any practical difference with this
(defn rep [n]
  (lazy-seq (cons n (rep n))))
and this?
(defn rep [n]
  (cons n (lazy-seq (rep n))))
In this particular case not really. But in the case where a cons cell doesn't wrap a raw value but a result of a function call to calculate it, the latter form is not fully lazy. For example:
(defn calculate-sth [n]
  (println "Calculating" n)
  n)
(defn rep1 [n]
  (lazy-seq (cons (calculate-sth n) (rep1 (inc n)))))
(defn rep2 [n]
  (cons (calculate-sth n) (lazy-seq (rep2 (inc n)))))
(take 0 (rep1 1))
;; => ()
(take 0 (rep2 1))
;; Prints: Calculating 1
;; => ()
Thus the latter form will evaluate its first element even if you might not need it.

Understanding shift/reset in Racket

I present two naive implementations of foldr in Racket.
This first one lacks a proper tail call and is problematic for large values of xs:
(define (foldr1 f y xs)
  (if (empty? xs)
      y
      (f (car xs) (foldr1 f y (cdr xs)))))
(foldr1 list 0 '(1 2 3))
; => (1 (2 (3 0)))
This second one uses an auxiliary function with a continuation to achieve a proper tail call making it safe for use with large values of xs
(define (foldr2 f y xs)
  (define (aux k xs)
    (if (empty? xs)
        (k y)
        (aux (lambda (rest) (k (f (car xs) rest))) (cdr xs))))
  (aux identity xs))
(foldr2 list 0 '(1 2 3))
; => (1 (2 (3 0)))
Looking at racket/control I see that racket supports first-class continuations. I was wondering if it was possible/beneficial to express the second implementation of foldr using shift and reset. I was playing around with it for a little while and my brain just ended up turning inside out.
Please provide thorough explanation with any answer. I'm looking for big-picture understanding here.
Disclaimers:
The “problem” of foldr you are trying to solve is actually its main feature.
Fundamentally, you cannot easily process a list in reverse, and the best you can do is reverse it first. Your solution with a lambda is, in its essence, no different from recursion: instead of accumulating recursive calls on the stack, you accumulate them explicitly in many lambdas. The only gain is that instead of being limited by the stack size, you can go as deep as your memory allows, since the lambdas are likely allocated on the heap; the trade-off is that you now perform dynamic memory allocations/deallocations for each "recursive call".
Now, with that out of the way, to the actual answer.
Let’s try to implement foldr keeping in mind that we can work with continuations. Here is my first attempt:
(define (foldr3 f y xs)
  (if (empty? xs)
      y
      (reset
       (f (car xs) (shift k (k (foldr3 f y (cdr xs))))))))
; reset:     set a marker here.
; (f ...):   ok, so we want to call `f`.
; (shift k): but we don't have a value to pass as the second argument yet.
;            Let's just pause the computation, wrap it into `k` to use later...
; (k ...):   and then resume it with the result of computing the fold over the tail.
If you look closely at this code, you will realise, that it is exactly the same as your foldr – even though we “pause” the computation, we then immediately resume it and pass the result of a recursive call to it, and this construction is, of course, not tail recursive.
Ok, then it looks like we need to make sure that we do not resume it immediately, but rather perform the recursive computation first, and then resume the paused computation with the recursive computation result. Let’s rework our function to accept a continuation and call it once it has actually computed the value that it needs.
(define (foldr4 f y xs)
  (define (aux k xs)
    (if (empty? xs)
        (k y)
        (reset
         (k (f (car xs) (shift k2 (aux k2 (cdr xs))))))))
  (reset (shift k (aux k xs))))
The logic here is similar to the previous version: in the non-trivial branch of the if we set a reset marker, and then start computing the expression as if we had everything we need; however, in reality, we do not have the result for the tail of the list yet, so we pause the computation, “package it” into k2, and perform a (this time tail-) recursive call saying “hey, when you’ve got your result, resume this paused computation”.
If you analyse how this code is executed, you’ll see that there is absolutely no magic in it and it works simply by “wrapping” continuations one into another while it traverses the list, and then, once it reaches the end, the continuations are “unwrapped” and executed in the reverse order one by one. In fact, this function does exactly the same as what foldr2 does – the difference is merely syntactical: instead of creating explicit lambdas, the reset/shift pattern allows us to start writing out the expression right away and then at some point say “hold on a second, I don’t have this value yet, let’s pause here and return later”... but under the hood it ends up creating the same closure that lambda would!
I believe, there is no way to do better than this with lists.
Another disclaimer: I don’t have a working Scheme/Racket interpreter with reset/shift implemented, so I didn’t test the functions.

Limit the recursion level in a Clojure repl

While working in the REPL, is there a way to specify the maximum number of times to recur before the REPL automatically ends the evaluation of an expression? As an example, suppose the following function:
(defn looping []
  (loop [n 1]
    (recur (inc n))))
(looping)
Is there a way to instruct the repl to give up after 100 levels of recursion? Something similar to print-level.
I respectfully hope that I'm not ignoring the spirit of your question, but why not simply use a when expression? It's nice and succinct and wouldn't change the body of your function much at all (1 extra line and a closing paren).
Whilst I don't believe what you want exists, it would be trivial to implement your own:
(def ^:dynamic *recur-limit* 100)
(defn looping []
  (loop [n 1]
    (when (< n *recur-limit*)
      (recur (inc n)))))
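Because *recur-limit* is dynamic, you can also tighten or relax it per evaluation with binding at the REPL (a small usage sketch):
(binding [*recur-limit* 10]
  (looping))
;; => nil - the loop stops once n reaches the limit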
Presumably this hasn't been added to the language because it's easy to construct what you need with the existing language primitives; apart from that, if the facility did exist but was 'invisible', it could cause an awful lot of confusion and bugs because code wouldn't always behave in a predictable and referentially transparent manner.

Accumulators, conj and recursion

I've solved 45 problems from 4clojure.com and I noticed a recurring problem in the way I try to solve some problems using recursion and accumulators.
I'll try to explain as best I can how I end up with fugly solutions, hoping that some Clojurers will "get" what I'm not getting.
For example, problem 34 asks you to write a function taking two integers as arguments and creating a range, without using range. Simply put, you do (... 1 7) and you get (1 2 3 4 5 6).
Now this question is not about solving this particular problem.
What if I want to solve this using recursion and an accumulator?
My thought process goes like this:
I need to write a function taking two arguments, so I start with (fn [x y] )
I'll need to recurse and keep track of a list, so I'll use an accumulator and write a second function inside the first one, taking an additional argument:
(fn
  [x y]
  ((fn g [x y acc] ...)
   x
   y
   '()))
(apparently I can't properly format that Clojure code on SO!?)
Here I'm already not sure I'm doing it correctly: the first function must take exactly two integer arguments (not my call), and if I want to use an accumulator, can I do so without creating a nested function?
Then I want to conj, but I cannot do:
(conj 0 1)
so I do weird things to make sure I've got a sequence first and I end up with this:
(fn
  [x y]
  ((fn g [x y acc] (if (= x y) y (conj (conj acc (g (inc x) y acc)) x)))
   x
   y
   '()))
But then this produces this:
(1 (2 (3 4)))
Instead of this:
(1 2 3 4)
So I end up doing an additional flatten and it works but it is totally ugly.
I'm beginning to understand a few things and I'm even starting, in some cases, to "think" in a more clojuresque way but I've got a problem writing the solution.
For example here I decided:
to use an accumulator
to recurse by incrementing x until it reaches y
But I end up with the monstrosity above.
There are a lot of way to solve this problem and, once again, it's not what I'm after.
What I'm after is how, after I decided to cons/conj, use an accumulator, and recurse, I can end up with this (not written by me):
#(loop [i %1
        acc nil]
   (if (<= %2 i)
     (reverse acc)
     (recur (inc i) (cons i acc))))
Instead of this:
((fn f [x y]
   (flatten
    ((fn g [x y acc]
       (if (= x y) acc (conj (conj acc (g (inc x) y acc)) x)))
     x
     y
     '())))
 1
 4)
I guess it's a start to be able to solve a few problems, but I'm a bit disappointed by the ugly solutions I tend to produce...
i think there are a couple of things to learn here.
first, a kind of general rule - recursive functions typically have a natural order, and adding an accumulator reverses that. you can see that because when a "normal" (without accumulator) recursive function runs, it does some work to calculate a value, then recurses to generate the tail of the list, finally ending with an empty list. in contrast, with an accumulator, you start with the empty list and add things to the front - it's growing in the other direction.
so typically, when you add an accumulator, you get a reversed order.
now often this doesn't matter. for example, if you're generating not a sequence but a value that is the repeated application of a commutative operator (like addition or multiplication), then you get the same answer either way.
but in your case, it is going to matter. you're going to get the list backwards:
;; (deftest and is come from clojure.test)
(defn my-range-0 [lo hi] ; normal recursive solution
  (if (= lo hi)
    nil
    (cons lo (my-range-0 (inc lo) hi))))
(deftest test-my-range-0
  (is (= '(0 1 2) (my-range-0 0 3))))
(defn my-range-1 ; with an accumulator
  ([lo hi] (my-range-1 lo hi nil))
  ([lo hi acc]
   (if (= lo hi)
     acc
     (recur (inc lo) hi (cons lo acc)))))
(deftest test-my-range-1
  (is (= '(2 1 0) (my-range-1 0 3)))) ; oops! backwards!
and often the best you can do to fix this is just reverse that list at the end.
but here there's an alternative - we can actually do the work backwards. instead of incrementing the low limit you can decrement the high limit:
(defn my-range-2
  ([lo hi] (my-range-2 lo hi nil))
  ([lo hi acc]
   (if (= lo hi)
     acc
     (let [hi (dec hi)]
       (recur lo hi (cons hi acc))))))
(deftest test-my-range-2
  (is (= '(0 1 2) (my-range-2 0 3)))) ; back to the original order
[note - there's another way of reversing things below; i didn't structure my argument very well]
second, as you can see in my-range-1 and my-range-2, a nice way of writing a function with an accumulator is as a function with two different sets of arguments. that gives you a very clean (imho) implementation without the need for nested functions.
also you have some more general questions about sequences, conj and the like. here clojure is kind-of messy, but also useful. above i've been giving a very traditional view with cons based lists. but clojure encourages you to use other sequences. and unlike cons lists, vectors grow to the right, not the left. so another way to reverse that result is to use a vector:
(defn my-range-3 ; this looks like my-range-1
  ([lo hi] (my-range-3 lo hi []))
  ([lo hi acc]
   (if (= lo hi)
     acc
     (recur (inc lo) hi (conj acc lo)))))
(deftest test-my-range-3 ; except that it works right!
  (is (= [0 1 2] (my-range-3 0 3))))
here conj is adding to the right. i didn't use conj in my-range-1, so here it is re-written to be clearer:
(defn my-range-4 ; my-range-1 written using conj instead of cons
  ([lo hi] (my-range-4 lo hi nil))
  ([lo hi acc]
    (if (= lo hi)
      acc
      (recur (inc lo) hi (conj acc lo)))))
(deftest test-my-range-4
  (is (= '(2 1 0) (my-range-4 0 3))))
note that this code looks very similar to my-range-3 but the result is backwards because we're starting with an empty list, not an empty vector. in both cases, conj adds the new element in the "natural" position. for a vector that's to the right, but for a list it's to the left.
and it just occurred to me that you may not really understand what a list is. basically a cons creates a box containing two things (its arguments). the first is the contents and the second is the rest of the list. so the list (1 2 3) is basically (cons 1 (cons 2 (cons 3 nil))). in contrast, the vector [1 2 3] works more like an array (although i think it's implemented using a tree).
so conj is a bit confusing because the way it works depends on the first argument. for a list, it calls cons and so adds things to the left. but for a vector it extends the array(-like thing) to the right. also, note that conj takes an existing sequence as first arg, and thing to add as second, while cons is the reverse (thing to add comes first).
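a quick illustration of that difference at the repl:
(conj '(1 2 3) 0) ;; => (0 1 2 3) - list: added on the left
(conj [1 2 3] 0)  ;; => [1 2 3 0] - vector: added on the right
(cons 0 '(1 2 3)) ;; => (0 1 2 3) - with cons, the thing to add comes first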
all the above code available at https://github.com/andrewcooke/clojure-lab
update: i rewrote the tests so that the expected result is a quoted list in the cases where the code generates a list. = will compare lists and vectors and return true if the content is the same, but making it explicit shows more clearly what you're actually getting in each case. note that '(0 1 2) with a ' in front is just like (list 0 1 2) - the ' stops the list from being evaluated (without it, 0 would be treated as a command).
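for example:
(= '(0 1 2) [0 1 2]) ;; => true - same contents, even though one is a list and the other a vector
'(0 1 2)             ;; => (0 1 2) - same as (list 0 1 2); the quote stops evaluation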
After reading all that, I'm still not sure why you'd need an accumulator.
((fn r [a b]
   (if (<= a b)
     (cons a (r (inc a) b))))
 2 4)
=> (2 3 4)
seems like a pretty intuitive recursive solution. the only thing I'd change in "real" code is to use lazy-seq so that you won't run out of stack for large ranges.
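A lazy variant of that idea could look something like this (a sketch, not part of the original answer; the name lazy-range is just illustrative):
(defn lazy-range [a b]
  (lazy-seq
    (when (<= a b)
      (cons a (lazy-range (inc a) b)))))
(take 3 (lazy-range 2 1000000))
;; => (2 3 4) - only the part you consume is ever realized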
how I got to that solution:
When you're thinking of using recursion, I find it helps to try and state the problem with the fewest possible terms you can think up, and try to hand off as much "work" to the recursion itself.
In particular, if you suspect you can drop one or more arguments/variables, that is usually the way to go - at least if you want the code to be easy to understand and debug; sometimes you end up compromising simplicity in favor of execution speed or reducing memory usage.
In this case, what I thought when I started writing was: "the first argument to the function is also the start element of the range, and the last argument is the last element". Recursive thinking is something you kind of have to train yourself to do, but a fairly obvious solution then is to say: a range [a, b] is a sequence starting with element a followed by a range of [a + 1, b]. So ranges can indeed be described recursively. The code I wrote is pretty much a direct implementation of that idea.
addendum:
I've found that when writing functional code, accumulators (and indexes) are best avoided. Some problems require them, but if you can find a way to get rid of them, you're usually better off if you do.
addendum 2:
Regarding recursive functions and lists/sequences, the most useful way to think when writing that kind of code is to state your problem in terms of "the first item (head) of a list" and "the rest of the list (tail)".
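For instance:
(first '(1 2 3)) ;; => 1     - the head
(rest '(1 2 3))  ;; => (2 3) - the tail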
I cannot add to the already good answers you have received, but I will answer in general. As you go through the Clojure learning process, you may find that many, but not all, problems can be solved using Clojure built-ins like map, and by thinking of problems in terms of sequences. This doesn't mean you should not solve things recursively, but you will hear -- and I believe it to be wise advice -- that Clojure recursion is for solving very low-level problems you cannot solve another way.
I happen to do a lot of .csv file processing, and recently received a comment that nth creates dependencies. It does, and use of maps can allow me to get at elements for comparison by name and not position.
I'm not going to throw out the code that uses nth with clojure-csv parsed data in two small applications already in production. But I'm going to think about things in a more sequency way the next time.
It is difficult to learn from books that talk about vectors and nth, loop .. recur and so on, and then realize learning Clojure grows you forward from there.
One of the things I have found that is good about learning Clojure, is the community is respectful and helpful. After all, they're helping someone whose first learning language was Fortran IV on a CDC Cyber with punch cards, and whose first commercial programming language was PL/I.
If I solved this using an accumulator I would do something like:
user=> (defn my-range [lb up c]
         (if (= lb up)
           c
           (recur (inc lb) up (conj c lb))))
#'user/my-range
then call it with
#(my-range % %2 [])
Of course, I'd use letfn or something to get around not having defn available.
So yes, you do need an inner function to use the accumulator approach.
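As a sketch, wrapping the accumulator version so that the outer function takes exactly two arguments might look like this (the name step is just illustrative):
(fn [lo hi]
  (letfn [(step [lb up c]
            (if (= lb up)
              c
              (recur (inc lb) up (conj c lb))))]
    (step lo hi [])))
Calling that with 1 and 4 yields [1 2 3].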
My thought process is that once I'm done the answer I want to return will be in the accumulator. (That contrasts with your solution, where you do a lot of work on finding the ending-condition.) So I look for my ending-condition and if I've reached it, I return the accumulator. Otherwise I tack on the next item to the accumulator and recur for a smaller case. So there are only 2 things to figure out, what the end-condition is, and what I want to put in the accumulator.
Using a vector helps a lot because conj will append to it and there's no need to use reverse.
I'm on 4clojure too, btw. I've been busy so I've fallen behind lately.
It looks like your question is more about "how to learn" than about a technical/code problem. You end up writing that kind of code because whatever way or source you learned programming in general, or Clojure in particular, has created a "neural highway" in your brain that makes you think about solutions in this particular way, so you end up writing code like this. Basically, whenever you face a problem (in this particular case recursion and/or accumulation), you end up using that "neural highway" and always come up with that kind of code.
The solution for getting rid of this "neural highway" is to stop writing code for the moment, keep the keyboard away, and start reading a lot of existing Clojure code (from existing solutions of 4clojure problems to open source projects on GitHub) and think about it deeply (even read a function 2-3 times to really let it settle down in your brain). This way you will end up dismantling your existing "neural highway" (which produces the code you write now) and will create a new one that produces beautiful and idiomatic Clojure code. Also, try not to jump to typing code as soon as you see a problem; rather, give yourself some time to think clearly and deeply about the problem and possible solutions.

A bi-directional map in clojure?

What would be the best way to implement a bi-directional map in clojure? (By bi-directional map, I mean an associative map which can provide both A->B and B->A access. So in effect, the values themselves would be keys for going in the opposite direction.)
I suppose I could set up two maps, one in each direction, but is there a more idiomatic way of doing this?
I'm interested both in cases where we want a bijection, implying that no two keys could map to the same value, and cases where that condition isn't imposed.
You could always use a Java library for this, like one of the collections in Apache Commons. TreeBidiMap implements java.util.Map, so it's even seq-able without any effort.
user> (def x (org.apache.commons.collections.bidimap.TreeBidiMap.))
#'user/x
user> (.put x :foo :bar)
nil
user> (keys x)
(:foo)
user> (.getKey x :bar)
:foo
user> (:foo x)
:bar
user> (map (fn [[k v]] (str k ", " v)) x)
(":foo, :bar")
Some things won't work though, like assoc and dissoc, since they expect persistent collections and TreeBidiMap is mutable.
If you really want to do this in native Clojure, you could use metadata to hold the reverse-direction hash. This is still going to double your memory requirements and double the time for every add and delete, but lookups will be fast enough and at least everything is bundled.
(defn make-bidi []
  (with-meta {} {}))
(defn assoc-bidi [h k v]
  (vary-meta (assoc h k v)
             assoc v k))
(defn dissoc-bidi [h k]
  (let [v (h k)]
    (vary-meta (dissoc h k)
               dissoc v)))
(defn getkey [h v]
  ((meta h) v))
You'd probably have to implement a bunch of other functions to get full functionality of course. Not sure how feasible this approach is.
user> (def x (assoc-bidi (make-bidi) :foo :bar))
#'user/x
user> (:foo x)
:bar
user> (getkey x :bar)
:foo
For most cases, I've found the following works fine:
(defn- bimap
  [a-map]
  (merge a-map (clojure.set/map-invert a-map)))
This will simply merge the original map with the inverted map into a new map, then you can use regular map operations with the result.
It's quick and dirty, but obviously you should avoid this implementation if any of the keys might be the same as any of the values or if the values in a-map aren't unique.
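A quick usage sketch, with illustrative data:
(def color->hex (bimap {:red "#f00" :green "#0f0"}))
(color->hex :red)    ;; => "#f00"
(color->hex "#0f0")  ;; => :green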
We can reframe this problem as follows:
Given a list of tuples ([a1 b1] [a2 b2] ...) we want to be able to quickly lookup tuples by both a and b. So we want mappings a -> [a b] and b -> [a b]. Which means that at least tuples [a b] could be shared. And if we think about implementing it then it quickly becomes apparent that this is an in-memory database table with columns A and B that is indexed by both columns.
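In plain Clojure the same idea can be sketched by storing each tuple once and indexing it by both columns (the names below are purely illustrative):
(defn index-pairs [pairs]
  {:by-a (into {} (map (fn [[a b]] [a [a b]]) pairs))
   :by-b (into {} (map (fn [[a b]] [b [a b]]) pairs))})
(def idx (index-pairs [[:x 1] [:y 2]]))
(get-in idx [:by-a :x]) ;; => [:x 1]
(get-in idx [:by-b 2])  ;; => [:y 2]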
So you can just use a lightweight in-memory database.
Sorry, I'm not familiar enough with the Java platform to recommend something specific. Of course Datomic comes to mind, but it could be overkill for this particular use case. On the other hand, it has immutability baked in, which seems desirable to you based on your comment to Brian's answer.
