Question on a set describing a transitive binary relation?

The image below shows a set that is supposed to describe a binary transitive relation:
That first arrow notation looks good at first, until I saw the d node. I thought that since d connects to c but cannot reach b (or any other node beyond c), it cannot be transitive?
A little bit of clarification would be great

The first panel is fine, i.e., it is transitive. Transitivity does not require d to have a (directed) path to b in this case. Transitivity, by definition, requires: "if there are x and y such that d → x and x → y, then d → y must also hold". Since c (which would potentially play the role of x here) does not point to anything, there is no chain of arrows starting from d, so there is no condition to be satisfied; the requirement is vacuously true when starting from d.
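For concreteness, here is a small sketch (not from the original post) that checks transitivity of a relation given as a list of pairs; it shows that a relation whose only arrow out of d leads to a node with no outgoing arrows is vacuously transitive:

type Rel a = [(a, a)]

-- A relation is transitive if, whenever x -> y and y -> z are in it,
-- then x -> z is in it as well.
isTransitive :: Eq a => Rel a -> Bool
isTransitive r = and [ (x, z) `elem` r | (x, y) <- r, (y', z) <- r, y == y' ]

main :: IO ()
main = do
  print (isTransitive [("d", "c")])                          -- True (vacuously)
  print (isTransitive [("d", "c"), ("c", "b")])              -- False: (d,b) missing
  print (isTransitive [("d", "c"), ("c", "b"), ("d", "b")])  -- True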

Related

Is the concept of equal subsets a fallacy?

A definition of subset that I found on the internet (in this web page, 2nd point, 3rd paragraph) and in a book (Set Theory and Related Topics by Lipschutz, page 3, Def. 1-1) says or implies that:
A = B if, at the same time, A ⊂ B and B ⊂ A;
This would imply that A is contained in B, but it would also imply that B is contained in A.
Wouldn't this be a fallacy as demonstrated in Russell's paradox?
I imagine it would be something like this; is that right?
This Img
Unfortunately, "contains" can be used in two very different ways for a set: "contains as a member" and "contains as a subset", so I would suggest avoiding it or being very clear which one you mean. I think the second one is less common to use "contains" for, but it still happens.
It's true that A can't be a member of B when B is a member of A (but not really related to Russell's paradox); but A can be a subset of B and B a subset of A. Just consider A={1} and B={1}. Then every member of A (i.e. 1) is a member of B, so A is a subset of B. And vice versa.
I imagine it would be something like this; is that right? This Img
This would be if B is a proper subset of A (that is, a subset of A but not equal to A) and A is a proper subset of B.
A set X is contained in set Y if every element of X is in Y.
So, even if X is equal to Y, X is considered to be contained in Y.
At this point, it should be clear that if X is contained in Y and Y is contained in X at the same time, then they must be equal.
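Stated symbolically (just restating the answers above, using the usual definition of subset and the axiom of extensionality):

$$A \subseteq B \iff \forall x\,(x \in A \Rightarrow x \in B)$$
$$(A \subseteq B \land B \subseteq A) \iff \forall x\,(x \in A \Leftrightarrow x \in B) \iff A = B$$

So mutual containment is not a paradox; it is exactly the condition for equality.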

In Idris, why do interface parameters have to be type or data constructors?

To get some practice with Idris, I've been trying to represent various basic algebraic structures as interfaces. The way I thought of organizing things at first was to make the parameters of a given interface be the set and the various operations over it, and the methods/fields be proofs of the various axioms. For example, I was thinking of defining Group like so:
interface Group (G : Type) (op : G -> G -> G) (e : G) (inv : G -> G) where
  assoc : {x,y,z : G} -> (x `op` y) `op` z = x `op` (y `op` z)
  id_l : {x : G} -> e `op` x = x
  id_r : {x : G} -> x `op` e = x
  inv_l : {x : G} -> x `op` (inv x) = e
  inv_r : {x : G} -> (inv x) `op` x = e
My reasoning for doing it this way instead of just making op, e, and inv methods was that it would be easier to talk about the same set being a group in different ways. Like, mathematically, it doesn't make sense to talk about a set being a group; it only makes sense to talk about a set with a specified operation being a group. The same set can correspond to two completely different groups by defining the operation differently. On the other hand, the proofs of the various interface laws don't affect the group. While the inhabitants (proofs) of the laws may be different, it doesn't result in a different group. Thus, one would have no use for declaring multiple implementations.
More fundamentally, this approach seems like a better representation of the mathematical concepts. It's a category error to talk about a set being a group, so the mathematician in me isn't thrilled about asserting as much by making the group operation an interface method.
This scheme isn't possible, however. The interface declaration itself typechecks, but as soon as I try to define an implementation, it fails: Idris complains, e.g.:
(+) cannot be a parameter of Algebra.Group
(Implementation arguments must be type or data constructors)
My question is: why this restriction? I assume there's a good reason, but for the life of me I can't see it. Like, I thought Idris collapses the value/type/kind hierarchy, so there's no real difference between types and values, so why do implementations treat types specially? And why are data constructors treated specially? It seems arbitrary to me.
Now, I could just achieve the same thing using named implementations, which I guess I'll end up doing now. I guess I'm just used to Haskell, where you can only have one instance of a typeclass for a given datatype. But it still feels rather arbitrary.... In particular, I would like to be able to define, e.g., a semiring as a tuple (R,+,*,0,1) where (R,+,0) is a monoid and (R,*,1) is a monoid (with the distributivity laws tacked on). But I don't think I can do that very easily without the above scheme, even with named implementations. I could only say whether or not R is a monoid---but for semirings, it needs to be a monoid in two distinct ways! I'm sure there are workarounds with some boilerplate type synonyms or something (which, again I'll probably end up doing), but I don't really see why that should be necessary.
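For what it's worth, here is a minimal sketch of the named-implementation workaround mentioned above (Idris 1 syntax; the interface MyMonoid and the implementation names Additive and Multiplicative are invented here for illustration):

interface MyMonoid m where
  mop   : m -> m -> m
  munit : m

[Additive] MyMonoid Nat where
  mop   = (+)
  munit = 0

[Multiplicative] MyMonoid Nat where
  mop   = (*)
  munit = 1

-- Pick an implementation explicitly at the call site:
five : Nat
five = mop @{Additive} 2 3

six : Nat
six = mop @{Multiplicative} 2 3

This still says "Nat is a MyMonoid" rather than "(Nat, +, 0) is a monoid", which is exactly the aesthetic complaint above; the named implementations only make it tolerable to have several such statements at once.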
$ idris --version
1.2.0

Lens Laws: what are they trying to tell me?

I've seen various versions of the Lens Laws. Not sure if they're all intended to be equivalent, so for definiteness I'll use the version on StackOverflow against tag Lenses [Edward Kmett ~ 5 years ago]
(I'm asking because I want more of a handle on bi-directional programming.)
Using a as a structure, b as a component/value in the structure:
get (set b a) = b
Ok. What you get is what you've put. Seems essential for anything calling itself a data structure/container. I might have a slight q: where did the initial a come from? Could I go directly get a? What would that mean?
get (set b' (set b a)) = b'
I believe this is intended to be telling me: what you get is what you last put (and anything you put before is lost forever). But it doesn't actually say that. It doesn't (for example) exclude that the lens is a stack within the structure -- i.e. get behaves like pop. So if I do a second get it might return the earlier b. IOW it needs to say: once you've set b' (whatever a), get will always return b' ad infinitum.
This law is sometimes written in the form: set b' (set b a) = set b' a. But I don't like that at all, which brings me to:
set (get a) a = a
Putting what you've already got does nothing. (That seems a barely interesting thing to say: doesn't it follow from Law 1?) But an equality test on the structure is breaking the abstraction. We (as clients of the structure) don't care how the structure organises itself internally. Our interface is in terms of the methods get, set. Putting what you've already got might change the structure's value for all we care -- just as long as a get returns that value we put.
If there's something crucial about the value/contents of set (get a) a, can't that be expressed in terms of get/set? And if it can't, why do we care?
All these laws are in terms of a single lens. So they would hold if the structure was merely a single 'slot' -- which seems a lot of machinery for something aka a 'variable'.
What does seem to be missing is anything about how you can combine distinct lenses to work through a more complex structure. Such that the structure allows each lens to work orthogonally. I believe there's a van Laarhoven law:
-- I need two lenses, so I'll use get', set' as well as get, set
get' (set b (set' b' a)) = b'
Do I not need a law like that? Please explain.
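A small sketch (not from the original posts) may help make the question concrete: two lenses written as plain get/set pairs over a pair type, focusing on independent slots. All three laws hold for each lens, and so does the two-lens law above:

getFst :: (a, b) -> a
getFst (x, _) = x

setFst :: a -> (a, b) -> (a, b)
setFst x (_, y) = (x, y)

getSnd :: (a, b) -> b
getSnd (_, y) = y

setSnd :: b -> (a, b) -> (a, b)
setSnd y (x, _) = (x, y)

main :: IO ()
main = do
  let a = (1 :: Int, "hi")
  print (getFst (setFst 2 a) == 2)                     -- Law 1
  print (getFst (setFst 3 (setFst 2 a)) == 3)          -- Law 2
  print (setFst (getFst a) a == a)                     -- Law 3
  print (getSnd (setFst 2 (setSnd "bye" a)) == "bye")  -- two-lens law: setFst leaves the snd slot alone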
I haven't used this formalism before, so it might show in my answer.
This is a mathematical law. It doesn't matter where a comes from, as long as it satisfies the relevant mathematical requirements. In programming, you could define it in your code, load it from a file, parse it from network call, build it recursively; whatever.
It can't say that. get doesn't return a new structure, only the value; and the whole point of lenses is to create a framework for immutable, side-effect-free structures. Let a' be (set b' (set b a)); get a' will return b' each and every time, because a' can't change, and get is a pure function; there is no place for state to be stored. Your "get will always return b' ad infinitum" is assumed to always be true for a pure function, so there is no need for a further stipulation. To implement a stack, you need one of two things: either for it to be mutable and side-effectful (so that get a == get a is not necessarily true), or the manipulation functions need to return the new stack - both put and get.
I failed to construct a counterexample for this, probably because it is intuitively so strong. Here is a very tenuous counterexample: Let get (Container b _) = b, set b (Container b c) = Container b (c + 1), set b (Container b' _) = Container b 0. Furthermore, let a = Container b' 0 and b != b'. First law: get (set b a) = get (Container b 0) = b - OK. Second law: get (set b' (set b a)) = get (set b' (Container b 0)) = get (Container b' 0) = b' - OK. However, set (get a) a = set (get (Container b' 0)) (Container b' 0) = set b' (Container b' 0) = Container b' 1 != a - not OK. Thus, it does not follow from the first law. Without this, you cannot test two structures for equality, and would instead need to iterate each accessor to prove that the two structures are equal (and as a JavaScript programmer, let me tell you: not having an object identity function is really awkward).
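That counterexample is easier to see in runnable form (a sketch, not from the original answer; the hidden Int plays the role of the counter c):

data Container = Container String Int deriving (Eq, Show)

get :: Container -> String
get (Container b _) = b

set :: String -> Container -> Container
set b (Container b' c)
  | b == b'   = Container b' (c + 1)   -- setting the value already present bumps the counter
  | otherwise = Container b  0

main :: IO ()
main = do
  let a = Container "old" 0
  print (get (set "new" a) == "new")                  -- Law 1: True
  print (get (set "newer" (set "new" a)) == "newer")  -- Law 2: True
  print (set (get a) a == a)                          -- Law 3: False, the counter was bumped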
You do. Imagine this: set b a = Container b, get (Container b) = b. First law: get (set b a) = get (Container b) = b - OK. Second law: get (set b' (set b a)) = get (set b' (Container b)) = get (Container b') = b' - OK; Third law: let a == Container b: set (get a) a = set (get (Container b)) a = set b a = Container b = a - OK. All three laws are satisfied with this very simple (and kind of obviously wrong) definition. Now let's add set' b a = Container' b, get' (Container' b) = b, and see what happens: get' (set b (set' b' a)) = get' (set b (Container' b')) = get' (Container b)... and this can't get evaluated. Oopsie. Or, imagine this: set b a = Container b 0, get (Container b _) = b, set' b a = Container b 1, get' (Container b _) = b. In this case, get' (set b (set' b' a)) = get' (set b (Container b' 1)) = get' (Container b 0) = b - not OK. This law guarantees that the values set by set' will be preserved in the structure even if we apply set (something that is definitely not the case in this example).
If you're coming at this with expectations of Bidirectional Programming, I'm not surprised you're puzzled, AntC.
The FAQ page Resources [1] cites Pierce and Voigtländer (both of whom state similar laws); but really they're working with a quite different semantics. For example the Pierce et al 2005 paper:
The get component of a lens corresponds exactly to a view definition. In order to support a compositional approach, we take the perspective that a view state is an entire database (rather than just a single relation, as in many treatments of views).
So these are 'views' in the database sense (and that leads to the punning on terms from optics). There's always a 'base' schema definition (data structure) to/from which the lenses transform the view. Both Pierce and Voigtländer are trying to manage the 'size and shape' of data structures, to preserve transformations between them. Then no wonder they're thinking that the 'base' is the chief holder of content, and the lenses merely mechanisms to look through.
With functional references, there's no such difficulty. Lenses focus on 'slots' in a single data structure. And the structure is treated as abstract.
If the functional references approach wants to lay down laws in terms of 'slots', and specifically treat what Pierce calls oblivious lenses, it has to take on the semantics for the parts of the structure outside the lens. What the Voigtländer et al 2012 paper deals with using the res (residue) function; or prior work on database views [Bancilhon & Spyratos 1981] calls the "complement".
The set function in the laws quoted in the O.P. is oblivious in the sense it ignores the current content of the 'slot' and overwrites it. Lenses aren't necessarily like that. Recent treatments use an upd function (with an additional parameter f for the update to apply to the current value). Note
get (upd f a) = f (get a).
get (set b a) = b = get (upd (const b) a). (Law 1)
Except that not all lens upd operations observe those equivalences. There is, for example, the neat trick of a lens that gets the day slot from a date; but whose upd (+ 1) increments the whole date. getDay (updDay (+ 1) (Date {year = 2017, month = 2, day = 28}) ) returns 1, not 29.
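That trick is easy to reproduce (a sketch, assuming Data.Time's Gregorian calendar helpers; updDay here applies f to the day-of-month and rolls the whole date by the difference):

import Data.Time.Calendar (Day, addDays, fromGregorian, toGregorian)

getDay :: Day -> Int
getDay d = let (_, _, dayOfMonth) = toGregorian d in dayOfMonth

updDay :: (Int -> Int) -> Day -> Day
updDay f d = addDays (fromIntegral (f (getDay d) - getDay d)) d

main :: IO ()
main = print (getDay (updDay (+ 1) (fromGregorian 2017 2 28)))  -- prints 1, not 29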
To the specific questions in the O.P.
"slight q": the initial a comes from a create function [Voigtländer]
and there's a bunch of create/get laws.
Or is already in the 'base' schema [Pierce].
@Amadan's answer is right on the money: get and set are pure functions.
get must return whatever was last set, by Law 1, so Law 2 seems pointless.
Law 3 is important to state for Pierce and Voigtländer because they regard the 'base' as critical. For functional references (if the data structure is supposed to be held abstract), stating this law breaks the abstraction. It also fails to state the behaviour for other Lenses into the structure if set changes the value -- which is surely what the Lens user wants to understand. So again it seems pointless.
Note neither Pierce's nor Voigtländer's approaches expect more than one Lens.
If two Lenses focus on independent slots within a structure, both of these hold:
-- continuing to use get'/set' as well as get/set
∀ b b' . get' (set b (set' b' a)) = b' -- law for set' inside set
∀ b' b . get (set' b' (set b a)) = b -- law for set inside set'
If two Lenses interfere/overlap, neither equation holds (in general for all values in the domain of set, set').
So to take the 'worst' but still plausible case from above of getDay/updDay focussing inside getDate/setDate:
Law 1 holds for getDate/setDate; but updDay doesn't behave like a version of set.
Laws 2 & 3 hold (but seem pointless).
There are no laws we can usefully write about their interaction. I guess the best we could do is segregate the Lenses that focus inside the same structure into groupings that do/don't mutually interfere.
Overall I don't think the Lens Laws are much help in understanding Lenses as they are used nowadays.
[1] https://github.com/ekmett/lens/wiki/FAQ#lens-resources

prolog in math - searching the level of a node in prolog

Assume we have a binary search tree, and the rule above(X,Y) means X is directly above Y. I also created the rule root(X), meaning X has no parent.
Then, I was trying to figure out the depth of a node in this tree.
Assume the root node of the tree is "r", so I have the fact level(r,0). In order to implement the rule level(N,D) :- ..., I was thinking there should be a recursion here.
Thus, I tried
level(N,D) :- \+ root(N), above(X,N), D is D+1, level(X,D).
So if N is not a root, there is a node X above N, the level D is incremented by one, and then we recurse. But when I tested this, it only works for the root case. When I created more facts, such as node "s" being the left child of node "r", my query level(s,D) returns "no". I traced the query, and it shows me
1 1 Call: level(s,_16) ?
1 1 Fail: level(s,_16) ?
I am just confused about why it fails when I call level(s,D).
There are some problems with your query:
In Prolog you cannot write something like D is D+1, because a variable can only be assigned one value;
at the moment you call D is D+1, D is not yet instantiated, so it will probably cause an error; and
You never state (at least not in the visible code) that the level/2 of the root is 0.
A solution is to first state that the level of any root is 0:
level(N,0) :-
root(N).
Now we have to define the inductive case: first we indeed look for a parent using the above/2 predicate. Strictly speaking, a check that N is not a root/1 is not necessary, because having a parent via above/2 already rules that out. Next we determine the level of that parent, LP, and finally we calculate the level of our node by stating that L is LP+1, where L is the level of N and LP the level of P:
level(N,L) :-
above(P,N),
level(P,LP),
L is LP+1.
Or putting it all together:
level(N,0) :-
root(N).
level(N,L) :-
above(P,N),
level(P,LP),
L is LP+1.
Since you did not provide a sample tree, I have no means to test whether this predicate behaves as you expect it to.
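For instance, with a hypothetical set of facts (invented here, since the question gives none), the queries behave as expected:

% r is the root; s and t are its children; u is a child of s
root(r).
above(r, s).
above(r, t).
above(s, u).

% ?- level(s, D).
% D = 1
% ?- level(u, D).
% D = 2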
About root/1
Note that by writing root/1, you introduce data duplication: you can simply write:
root(R) :-
\+ above(_,R).

Prolog Recursion - Satisfying Both Directions (Simple)

I am very new to Prolog and I was given this assignment.
My code is as follows:
relatives(cindy,tanya).
relatives(tanya,alan).
relatives(alan,mike).
relatives(kerry,jay).
relatives(jay,alan).
isRelated(X,Y):-
relatives(X,Y).
isRelated(X,Y):-
relatives(X,Z),
isRelated(Z,Y).
Simple enough. This shows that if:
?- isRelated(cindy,mike).
Prolog will return true. Now, I'm stuck on how to make it return true if:
?- isRelated(mike,cindy).
I've been trying to come up with ideas like if isRelated(Z,Y) returns false, then switch X and Y, and run isRelated again. But I'm not sure if Prolog even allows such an idea. Any hints or advice would be greatly appreciated. Thanks!
UPDATE:
So I added:
isRelated(X,Y):-
relatives(X,Y);
relatives(Y,X).
That satisfies "direct" relationships, but sure enough I found out that it doesn't satisfy indirect relationships.
I really want to do something like, if the initial query:
isRelated(mike,cindy)
fails, then try and see if the reverse is true by switching X and Y:
isRelated(cindy,mike)
That will definitely return true. I just don't know how to do this syntactically in Prolog.
A further hint, to add to those in the comments (since I can't leave comments yet): with your original set of rules and facts,
isRelated(cindy,tanya) is true, but isRelated(tanya,cindy) is not, so you need to make isRelated(X,Y) symmetric; what simple addition to isRelated would achieve that?
Also, you could try drawing a graph of the relation relatives(X,Y), with an arrow from X to Y for all your base facts, and see if that helps you think about how the Prolog interpreter is going to attempt to satisfy a query.
So to answer your last question, you don't switch the values of X and Y in Prolog, like you would call swap(x,y) in C, say. The value held by a logic variable can not be changed explicitly, only back-tracked over. But you can easily use Y where you would use X, and vice versa:
somePred(X,Y):- is_it(X,Y).
somePred(X,Y):- is_it(Y,X).
This defines somePred predicate as a logical disjunction, an "OR". It can be written explicitly too, like
somePred(X,Y):- is_it(X,Y) ; is_it(Y,X).
Note the semicolon there. A comma , between predicates OTOH defines a conjunction, an "AND" (a comma inside a compound term just serves to delimit the term's "arguments").
You're almost there; you're just trying, I think, to cram too much stuff into one predicate.
Write the problem statement in English and work from that:
A relationship exists between two people, X and Y
if X and Y are directly related, or
if any direct relative of X, P, is related to Y.
Then it gets easy. I'd approach it like this:
First, you have your set of facts about relatives.
related( cindy, tanya ).
...
related( jay, alan ).
Then, a predicate describing a direct relationship is terms of those facts:
directly_related( X , Y ) :- % a direct relationship exists
related(X,Y) % if X is related to Y
. % ... OR ...
directly_related( X , Y ) :- % a direct relationship exists
related(Y,X) % if Y is related to X
. %
Finally, a predicate describing any relationship:
is_related(X,Y) :- % a relationship exists between X and Y
directly_related(X,Y) % if a direct relationship exists between them
. % ... OR ...
is_related(X,Y) :- % a relationship exists between X and Y
directly_related(X,P) , % if a direct relationship exists between X and some other person P
is_related(P,Y) % and [recursively] a relationship exists between P and Y.
. %
The solution is actually more complicated than this:
The facts about relationships describe one or more graphs. More on graphs at http://web.cecs.pdx.edu/~sheard/course/Cs163/Doc/Graphs.html. What you're doing is finding a path from node X to Node Y in the graph.
If the graphs described by the facts about relationships have one or more paths between X and Y, the above solution can (and will) succeed multiple times (on backtracking), once for every such path. The solution needs to be deterministic. Normally, having established that two people are related, we're done: just because I have two cousins doesn't mean I'm related to my aunt twice.
If the graph of relationships contains cycles (almost certainly true) such that a "circular" path exists: A → B → C → A …, the solution is susceptible to unlimited recursion. That means the solution needs to detect and deal with cycles. How might that be accomplished?
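One common approach (a sketch, not part of the original answer) is to thread a list of already-visited people through the recursion, so no person is expanded twice, and to cut after the first success so a ground query like is_related(mike,cindy) succeeds exactly once:

% replaces the two is_related/2 clauses above
is_related(X,Y) :-
    is_related(X,Y,[X]),
    !.                              % succeed at most once for a ground query

is_related(X,Y,_) :-
    directly_related(X,Y).
is_related(X,Y,Visited) :-
    directly_related(X,P),
    \+ member(P,Visited),           % never revisit a person: no infinite loops on cycles
    is_related(P,Y,[P|Visited]).

Since directly_related/2 is symmetric, the relationship graph is effectively undirected and full of cycles, which is exactly why the visited list is needed.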
