Alloy specifications - software-design

I'm a beginner learning Alloy, and I want to know what n.^address means (an example would help). Logically, if we consider address to be a set of pairs like (A0, A1), how can we join n, which is for example N0, with this set? Since the leftmost elements of the pairs in address are not of the same nature as n, I supposed it would not be possible.
I'd really appreciate it if anyone can guide me.

It's been a while since I used Alloy, but the ^ operator represents the transitive closure of its operand relation. So if address is {(a,b), (b,c)} then ^address is {(a,b), (b,c), (a,c)}.
n.^address is the join of n with this new relation, i.e. the set of atoms reachable from n in one or more steps.
So if n is a, then n.^address is {b,c}
Example:
abstract sig atom {
  address: lone atom
}

one sig a, b, c extends atom {}

fact {
  address = a->b + b->c
}

check {
  a.^address = b + c
}

You ask what n.^address means.
The expression n.^address is a join between the set of tuples denoted by n and the set of tuples denoted by ^address.
The expression ^address, in turn, denotes the transitive closure of the relation address, i.e. the smallest relation containing address which is transitive.
Whether there is in fact, or can be in principle, any tuple in n whose rightmost value is the same as the leftmost value of some tuple in ^address -- or, said another way, whether the expression n.^address is guaranteed to denote the empty set or not -- depends partly on how the variable n and the relation address are defined and partly on how the universe is populated. The same is true for whether the transitive closure of address is the same as address or a larger relation.
If N0, A0, and A1 are all atoms, and if the relation address contains only the pair (A0, A1), and the expression n denotes (the singleton set containing) the atom N0, then indeed the expression n.^address will denote the empty set. If on the other hand address contains the tuple (N0, A0) as well as the tuple (A0, A1), then
the expression address denotes the two-element set {(N0, A0), (A0, A1)},
the expression ^address denotes the three-element set {(N0, A0), (A0, A1), (N0, A1)}, since transitivity forces the pair (N0, A1) to be included, and
the expression n.^address denotes the set {A0, A1}, because joining {(N0)} with ^address keeps every tuple whose leftmost atom is N0 and drops that atom: (N0, A0) contributes A0 and (N0, A1) contributes A1.
Since you don't provide any more information about the Alloy model you have in mind, it's not possible to say much more.
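If it helps to make the join concrete, here is a small sketch in Haskell rather than Alloy (the representation and all names are mine): a binary relation is a set of pairs, closure computes ^, and joinAtom performs the dot join of an atom with a relation.

import Data.List (nub)

-- A binary relation as a set of pairs.
type Rel a = [(a, a)]

-- One closure step: add (x, z) whenever (x, y) and (y, z) are present.
step :: Eq a => Rel a -> Rel a
step r = nub (r ++ [ (x, z) | (x, y) <- r, (y', z) <- r, y == y' ])

-- Transitive closure: iterate until nothing new is added.
closure :: Eq a => Rel a -> Rel a
closure r = let r' = step r in if length r' == length r then r else closure r'

-- The join n.r: every atom reachable in one step from n via r.
joinAtom :: Eq a => a -> Rel a -> [a]
joinAtom n r = [ y | (x, y) <- r, x == n ]

-- joinAtom 'a' (closure [('a','b'), ('b','c')]) == "bc", i.e. {b, c}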

Related

Why "Algebraic data type" use "Algebraic" in the name?

When I learn Scala/Haskell, I see there is a concept of an algebraic data type. I've read the explanation on Wikipedia, but I still have a question:
Why does it use the word "algebraic" in its name? Does it have some relationship with algebra?
Consider the type Bool. This type, of course, can take on one of two possible values: True or False.
Now consider
data EitherBool = Left Bool | Right Bool
How many values can this type take on? There are 4: Left False, Left True, Right False, Right True. How about
data EitherBoolInt = Left Bool | Right Int8
Here there are 2 possible values in the Left branch, and 2^8 in the Right branch. For a total of 2 + 2^8 possible values for EitherBoolInt. It should be easy to see that for any set of constructors and types, this kind of construction will give you a datatype with a space of possible values the size of the sum of the possible values of each individual constructor. For this reason, it's called a sum type.
Consider instead
data BoolAndInt = BAndI Bool Int8
or simply
type BoolAndInt = (Bool, Int8)
How many values can this take on? For each possible Int8, there are two BoolAndInts, for a total of 2*2^8 = 2^9 values. The total number of possible values is the product of the number of values of each field of the constructor, so this is called a product type.
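To see the product count mechanically, here is a quick sketch (mine, not the answer's) that enumerates every value of the product type:

import Data.Int (Int8)

-- Every (Bool, Int8) pair; the count is the product of the component
-- counts: 2 * 256 = 512 = 2^9.
allBoolAndInts :: [(Bool, Int8)]
allBoolAndInts = [ (b, i) | b <- [minBound .. maxBound]
                          , i <- [minBound .. maxBound] ]

-- length allBoolAndInts == 512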
This idea can be extended further -- for example, functions from a->b are an exponential datatype (see The Algebra of Algebraic Datatypes). You can even create a reasonable notion of the derivative of a datatype. This is not even a purely theoretical idea -- it's the basis for the functional construct of "zippers". See The Derivative of a Datatype is the Type of its One-Hole Contexts and The Wikipedia entry on zippers.
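For the exponential case, the number of functions a -> b is |b|^|a|; for Bool -> Bool that is 2^2 = 4, and the four functions can simply be listed (my sketch):

-- All four functions Bool -> Bool: two possible outputs for each of
-- the two possible inputs, so 2^2 = 4 functions in total.
fns :: [Bool -> Bool]
fns = [id, not, const True, const False]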
In simple words, we must consider here the relationship between algebra and types. Haskell's algebraic data types are so named because they correspond to an initial algebra in category theory.
Wikipedia says:
In computer programming, particularly functional programming and type theory, an algebraic data type is a kind of composite type, i.e. a type formed by combining other types.
Let's take Maybe a data type:
data Maybe a = Nothing | Just a
Maybe a indicates that it might contain something of type a (Just Int, for example) but can also be empty: Nothing. In Haskell, types are objects, for example Int. Type constructors take types and produce new types, for example Maybe Int. Algebraic refers to the property that an algebraic data type is created by the algebraic operations, sums and products, where:
"sum" is alternation (A | B, meaning A or B but not both)
"product" is combination (A B, meaning A and B together)
For example, let's see the sum for Maybe a. To start, let's define an Add type:
data Add a b = Left a | Right b
In Haskell, | is or, so a value is either Left a or Right b. The vertical bar | shows us that Maybe, which we defined above, is a sum type, meaning we can write it with Add (conceptually; this is pseudo-notation rather than valid Haskell):
type Maybe a = Add Nothing (Just a)
Nothing here is a unit type:
In the area of mathematical logic and computer science known as type theory, a unit type is a type that allows only one value
data Unit = Unit
Or () in Haskell.
Just a is a singleton type. Singleton types are those types which have only one value.
data Just a = Just a
After that, we can rewrite it as:
type Maybe a = Add () a
So we have the unit type, which counts as 1, and the singleton type, which counts as a. Now we can say that Maybe a is the same as 1 + a.
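The claim that Maybe a is 1 + a can be made precise as an isomorphism with Either () a, Haskell's built-in sum type. A minimal sketch (the function names are mine):

-- Either () a plays the role of Add () a; the two conversions are
-- mutually inverse, so Maybe a really is "1 + a".
toMaybe :: Either () a -> Maybe a
toMaybe (Left ()) = Nothing
toMaybe (Right x) = Just x

fromMaybe' :: Maybe a -> Either () a
fromMaybe' Nothing  = Left ()
fromMaybe' (Just x) = Right x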
If you want to go deeper: The Algebra of Data, and the Calculus of Mutation.
Related question: https://math.stackexchange.com/questions/50375/whats-the-meaning-of-algebraic-data-type
My answer (there): it's all about algebraic theories.

Using "find_theorems" in Isabelle

I want to find theorems. I have read the section on find_theorems in the Isabelle/Isar reference manual:
find_theorems criteria

Retrieves facts from the theory or proof context matching all of given search criteria. The criterion name: p selects all theorems whose fully qualified name matches pattern p, which may contain "*" wildcards. The criteria intro, elim, and dest select theorems that match the current goal as introduction, elimination or destruction rules, respectively. The criterion solves returns all rules that would directly solve the current goal. The criterion simp: t selects all rewrite rules whose left-hand side matches the given term. The criterion term t selects all theorems that contain the pattern t -- as usual, patterns may contain occurrences of the dummy "_", schematic variables, and type constraints.

Criteria can be preceded by "-" to select theorems that do not match. Note that giving the empty list of criteria yields all currently known facts. An optional limit for the number of printed facts may be given; the default is 40. By default, duplicates are removed from the search result. Use with_dups to display duplicates.
As far as I understand, find_theorems is used in the find window of Isabelle/jEdit. The above does not help me find relevant theorems for the following situation (Lambda is a theory of the Nominal Isabelle extension; the tarball is here):
theory First
imports Lambda
begin
theorem "Lam [x].(Lam [y].(App (Var x)(Var y))) = Lam [y].(Lam [x].(App (Var y)(Var x)))"
When I try the search expression Lam, Isabelle/jEdit says
Inner syntax error: unexpected end of input
Failed to parse term
How can I make it look for all the theorems that contain the constant Lam?
Since Lam, like the ordinary lambda (%), is not a term on its own, you should add the remaining parts to get a proper term, which may contain wildcards. In your example, I would use
find_theorems "Lam [_]. _"
which gives lots of answers.
Typically this happens whenever special syntax was defined for some constant. But there is (almost) always an underlying ("raw") constant. To find out which constant provides the Lam [_]. _ syntax, you can Ctrl-click Lam (inside a proper term) within Isabelle/jEdit. This will jump to the definition of the underlying constant.
For Lam there is the additional complication that the binder syntax uses exactly the same string as the underlying constant, namely Lam, as can be seen at the place of definition:
nominal_datatype lam =
    Var "name"
  | App "lam" "lam"
  | Lam x::"name" l::"lam"  binds x in l  ("Lam [_]. _" [100, 100] 100)
In such cases you can use the long name of the constant by prefixing it with the theory name, i.e., Lambda.Lam.
Note: The same works for binders like ALL x. P x (with underlying constant All), but not for the built-in %x. x.

How is a table like a mathematical relation? [closed]

I have recently been reviewing Codd's relational algebra and relational databases. I recall that a relation is a set of ordered tuples and that a function is a relation satisfying the additional property that each point in the domain maps to a single point in the codomain. In this sense, each table defines a finite-point function from the primary key onto the space of the codomain defined by all the other columns. Is this the sense in which a table is a relation? If so, why is relational algebra not functional algebra, and why not call it a functional database instead?
Thanks.
BTW, sorry if this is not quite a normal form for stackoverflow (hah, a DB joke!) but I looked at all the forums and this seemed the best.
Well, there is C.J. Date's "An Introduction to Database Systems", and H. Darwen's "An Introduction to Relational Database Theory". Both are excellent books and I highly recommend reading them both.
Now to the actual question. In mathematics, if you have n sets A1, A2, ..., An, you can form their Cartesian product A1 x A2 x ... x An, which is the set of all possible n-tuples (a1, a2, ..., an), where ai is an element of Ai. An n-ary relation R is, by definition, a subset of the Cartesian product of n sets.
Functions are binary relations: they are subsets of Dom x Cod. But there are relations of higher arity. For example, on the set Humans x Humans x Humans we can define a relation R by taking all tuples (x, y, z) where x and y are parents of z.
Now there is one important notion from logic: the predicate. A predicate is a map from a Cartesian product A1 x A2 x ... x An to the set of statements. Let's look at the predicate P(x,y,z) = "x and y are parents of z". For each tuple (x,y,z) from Humans x Humans x Humans we obtain a statement, true or false. And the set of all tuples which give us true statements, the predicate's truth set, is... a relation!
And notice that having the truth set is all we actually need in order to work with a predicate. So, when we model our enterprise, we invent a bunch of predicates which describe it, and store their truth sets in the relational database.
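A tiny Haskell sketch (names and data invented) of this predicate/truth-set correspondence:

import qualified Data.Set as Set

-- The stored relation: the truth set of "x and y are parents of z".
parents :: Set.Set (String, String, String)
parents = Set.fromList [("Ann", "Bob", "Carl"), ("Ann", "Bob", "Dora")]

-- The predicate is recovered from its truth set by membership testing.
areParentsOf :: String -> String -> String -> Bool
areParentsOf x y z = (x, y, z) `Set.member` parents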
And so, each operation on relations has a corresponding operation on predicates: when we take relations and join, project, and filter them, we end up with a new relation, and we know which predicate it is the truth set of. We just take the corresponding predicates, AND them, bind variables with existential quantifiers, and we get a new predicate whose truth set we know.
Edit: Now, I have to note that since a relation is a set, its tuples are not ordered. So a table is just a model of a relation: you can take two different tables which represent the same relation. Also, it is customary in relational theory to work with more generally defined tuples and Cartesian products. Above I defined a tuple as (a1, a2, ..., an), basically a function from {1, 2, ..., n} to A1 U A2 U ... U An (where the image of i must lie in Ai). In relational theory, we instead take a tuple to be a function from a set of attribute names {name1, name2, ..., namen} to A1 U A2 U ... U An, so it becomes a record: a tuple with named components. And of course this means that a record's components are not ordered: (x: 1, y: 2), the function from { "x", "y" } to N which maps x to 1 and y to 2, is the same tuple/record as (y: 2, x: 1).
So, if you take a table and swap rows, or swap columns (with their headers!), you end up with a new table which represents the same relation.
This Wikipedia page goes into detail about the rationale behind the model. Conceptually, the key is just a means of accessing a given tuple, not part of the tuple itself--see also Codd's 12 rules, #2.

regular languages with concatenations

The regular languages are closed under the operation:
init(L) = the set of strings w such that for some x, wx is in L.
EDIT:
x can be any string: several characters, a single character, or the empty string.
How can I prove that?
OK, I misread the question the first time; now I get it. It's still trivial. Looking at the automaton, what you are searching for is a partition of the automaton into two state sets S1 and S2, so that just one transition is between them (and if it is from S1->S2, S1 of course contains the start node, and S2 the end node). Such a partition always exists (except for the empty language); in case there is no such node you can add one, so w is just a set containing the empty word, which is of course also regular (as is the empty-language case).
Unless I'm misunderstanding, the answer is that you can't, because it's not true.
First, let's consider the language L = {aa, bb, cc} over the alphabet {a, b, c}.
Then init(L) = {a, b, c}. However, none of the elements of init(L) is in L.
Edit: If x may be the empty string, then init(L) = {a, b, c, aa, bb, cc}, which is still not equal to L.
A language is regular iff there's a finite-state automaton that recognizes it. So suppose L is a regular language and let A be an automaton that recognizes it. Now say that a state of A is "good" if there is some path of transitions from it to an accepting state. Define a new automaton A' that is identical to A except that every "good" state is made accepting. Then the language recognized by A' is exactly init(L).
I think it's a new DFA B that makes every state of A (the original DFA) from which a final state of A is reachable into a final state of B.
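Here is that construction sketched in Haskell (the DFA representation and all names are mine, not from the answers): compute the states from which an accepting state is reachable, then make exactly those states accepting.

import Data.List (nub)

-- A DFA over state type s and alphabet type a.
data DFA s a = DFA
  { states :: [s]          -- all states
  , sigma  :: [a]          -- the alphabet
  , delta  :: s -> a -> s  -- total transition function
  , start  :: s
  , finals :: [s]          -- accepting states
  }

-- "Good" states: those from which some accepting state is reachable,
-- found as a backward fixpoint.
goodStates :: Eq s => DFA s a -> [s]
goodStates m = go (nub (finals m))
  where
    go good =
      let good' = nub (good ++ [ q | q <- states m
                                   , c <- sigma m
                                   , delta m q c `elem` good ])
      in if length good' == length good then good else go good'

-- The automaton for init(L): same transitions, but every good state accepts.
initDFA :: Eq s => DFA s a -> DFA s a
initDFA m = m { finals = goodStates m }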

What is the difference between equality and equivalence?

I've come across a few instances in mathematics and computer science texts that use the equivalence symbol ≡ (basically an '=' with three lines), and it always makes sense to me to read this as if it were equality. What is the difference between these two concepts?
Wikipedia: Equivalence relation:
In mathematics, an equivalence relation is a binary relation between two elements of a set which groups them together as being "equivalent" in some way. Let a, b, and c be arbitrary elements of some set X. Then "a ~ b" or "a ≡ b" denotes that a is equivalent to b.
An equivalence relation "~" is reflexive, symmetric, and transitive.
In other words, = is just an instance of equivalence relation.
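On a finite domain the three laws can even be checked by brute force. A small Haskell sketch of my own, for illustration:

-- Check that rel is reflexive, symmetric, and transitive on dom.
isEquivalence :: [a] -> (a -> a -> Bool) -> Bool
isEquivalence dom rel =
     and [ rel x x | x <- dom ]                                        -- reflexive
  && and [ rel y x | x <- dom, y <- dom, rel x y ]                     -- symmetric
  && and [ rel x z | x <- dom, y <- dom, z <- dom, rel x y, rel y z ]  -- transitive

-- isEquivalence [0 .. 20] (\a b -> a `mod` 3 == b `mod` 3) == True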
Edit: These seemingly simple criteria of being reflexive, symmetric, and transitive are not always trivial to satisfy. See Bloch's Effective Java, 2nd ed., p. 35 for example:
public final class CaseInsensitiveString {
    private final String s;
    ...
    // Broken - violates symmetry!
    @Override public boolean equals(Object o) {
        if (o instanceof CaseInsensitiveString)
            return s.equalsIgnoreCase(
                ((CaseInsensitiveString) o).s);
        if (o instanceof String)  // One-way interoperability!
            return s.equalsIgnoreCase((String) o);
        return false;
    }
}
The above equals implementation breaks symmetry because CaseInsensitiveString knows about the String class, but the String class doesn't know about CaseInsensitiveString.
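For contrast, here is a Haskell sketch (mine, not Bloch's) where this particular symmetry pitfall cannot arise, because the Eq instance only ever compares two values of the same type:

import Data.Char (toLower)

newtype CIString = CIString String

-- Both sides are lowered before comparison, and the instance can only
-- be applied to two CIStrings, so symmetry holds by construction.
instance Eq CIString where
  CIString a == CIString b = map toLower a == map toLower b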
I take your question to be about math notation rather than programming. The triple equal sign you refer to can be written ≡ in HTML or \equiv in LaTeX.
a ≡ b most commonly means "a is defined to be b" or "let a be equal to b".
So 2+2=4 but φ ≡ (1+sqrt(5))/2.
Here's a handy equivalence table:
Mathematicians    Computer scientists
--------------    -------------------
      =                   ==
      ≡                   =
(The other answers about equivalence relations are correct too but I don't think those are as common. There's also a ≡ b (mod m) which is pronounced "a is congruent to b, mod m" and in programmer parlance would be expressed as mod(a,m) == mod(b,m). In other words, a and b are equal after mod'ing by m.)
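Spelled out as a (hypothetical) Haskell helper:

-- a ≡ b (mod m): both sides are equal after reducing mod m.
congruent :: Integer -> Integer -> Integer -> Bool
congruent m a b = a `mod` m == b `mod` m

-- congruent 12 15 3 == True, i.e. 15 ≡ 3 (mod 12)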
A lot of languages distinguish between equality of the objects and equality of the values of those objects.
Ruby, for example, has three different ways to test equality. The first, equal?, compares two variables to see if they point to the same instance. This is equivalent to checking, in a C-style language, whether two pointers refer to the same address. The second method, ==, tests value equality, so 3 == 3.0 would be true in this case. The third, eql?, compares both value and class type.
Lisp also has different concepts of equality depending on what you're trying to test.
In languages that I have seen that differentiate between equality and equivalence, equality usually means the type and value are the same, while equivalence means that just the values are the same. For example:
int i = 3;
double d = 3.0;
i and d would have an equivalence relationship, since they represent the same value, but not equality, since they have different types. Other languages may have different ideas of equivalence (such as whether two variables represent the same object).
The answers above are right, or partially right, but they don't explain exactly what the difference is. In theoretical computer science (and probably in other branches of mathematics) it has to do with quantification over the free variables of the logical equation (that is, when we use the two notations at once).
For me the best way to understand the difference is:
By definition,
A ≡ B
means
for all possible values of the free variables in A and B, A = B
or
A ≡ B <=> [A = B]
By example:
x = 2x iff x = 0 (in fact, iff here plays the same role as ≡)
but
x ≡ 2x iff False (because it is not the case that x = 2x for all possible values of x)
I hope it helps
Edit:
Another thing that comes to mind is the definitions of the two.
A = B is defined as A <= B and A >= B, where <= (less-than-or-equal, not implication) can be any ordering relation.
A ≡ B is defined as A <=> B (iff, if and only if: implication in both directions); it is worth noting that implication is also an ordering relation, which is why it is possible (but less precise and often confusing) to use = instead of ≡.
I guess the conclusion is that when you see =, you have to figure out the author's intention based on the context.
Take it outside the realm of programming.
(31) equal -- (having the same quantity, value, or measure as another; "on equal terms"; "all men are equal before the law")
equivalent, tantamount -- (being essentially equal to something; "it was as good as gold"; "a wish that was equivalent to a command"; "his statement was tantamount to an admission of guilt")
At least in my dictionary, 'equivalence' means it is a good-enough substitute for the original, but not necessarily identical, while 'equality' conveys complete identity.
null == 0 # true, null is equivalent to 0 (in PHP)
null === 0 # false, null is not equal to 0 (in PHP)
(Some people use ≈ to represent nonidentical values instead.)
The difference resides above all in the level at which the two concepts are introduced. '≡' is a symbol of formal logic where, given two propositions a and b, a ≡ b means (a => b AND b => a).
'=' is instead the typical example of an equivalence relation on a set, and presupposes at least a theory of sets. When one defines a particular set, one usually provides it with a suitable notion of equality, which comes in the form of an equivalence relation and uses the symbol '='. For example, when you define the set Q of the rational numbers, you define equality a/b = c/d (where a/b and c/d are rational) if and only if ad = bc (where ad and bc are integers, the notion of equality for integers having already been defined elsewhere).
Sometimes you will find the informal notation f(x) ≡ g(x), where f and g are functions: It means that f and g have the same domain and that f(x) = g(x) for each x in such domain (this is again an equivalence relation). Finally, sometimes you find ≡ (or ~) as a generic symbol to denote an equivalence relation.
You could have two statements that have the same truth value (equivalent) or two statements that are the same (equality). As well the "equal sign with three bars" can also mean "is defined as."
Equality really is a special kind of equivalence relation, in fact. Consider what it means to say:
0.9999999999999999... = 1
That suggests that equality is just an equivalence relation on "string numbers" (which are defined more formally as functions from Z -> {0,...,9}). And as this case shows, the equivalence classes are not even singletons.
The first problem is: what do equality and equivalence mean in this case? Essentially, contexts are quite free to define these terms.
The general tenor I got from various definitions is: For values called equal, it should make no difference which one you read from.
The grossest example that violates this expectation is C++: x and y are said to be equal if x == y evaluates to true, and x and y are said to be equivalent if !(x < y) && !(y < x). Even apart from user-defined overloads of these operators, for floating-point numbers (float, double) those are not the same: All NaN values are equivalent to each other (in fact, equivalent to everything), but not equal to anything including themselves, and the values -0.0 and +0.0 compare equal (and equivalent) although you can distinguish them if you’re clever.
In a lot of cases, you’d need better terms to convey your intent precisely. Given two variables x and y,
identity or “the same” for expressing that there is only one object and x and y refer to it. Any change done through x is, perhaps inadvertently, observable through y and vice versa. In Java, reference type variables are checked for identity using ==, in C# using the ReferenceEquals method. In C++, if x and y are references, std::addressof(x) == std::addressof(y) will do (whereas &x == &y will work most of the time, but & can be customized for user-defined types).
bitwise or structure equality for expressing that the internal representations of x and y are the same. Notice that bitwise equality breaks down when objects can reference (parts of) themselves internally. To get the intended meaning, the notion has to be refined in such cases to say: Structured the same. In D, bitwise equality is checked via is and C offers memcmp. I know of no language that has built-in structure equality testing.
indistinguishability or substitutability for expressing that values cannot be distinguished (through their public interface): if a function f takes two parameters, and x and y are indistinguishable, the calls f(x, y), f(x, x), and f(y, y) always return indistinguishable values, unless f checks for identity (see the bullet point above) directly or maybe by mutating the parameters. An example could be two search trees that happen to contain indistinguishable elements but whose internal trees are laid out differently. The internal tree layout is an implementation detail that normally cannot be observed through the public methods.
This is also called Leibniz equality, after Gottfried Wilhelm Leibniz, who defined equality as the lack of differences.
equivalence for expressing that objects represent values considered essentially the same from some abstract reasoning. For an example for distinguishable equivalent values, observe that floating-point numbers have a negative zero -0.0 distinct from +0.0, and e.g. sign(1/x) is different for -0.0 and +0.0. Equivalence for floating-point numbers is checked using == in many languages with C-like syntax (aka. Algol syntax). Most object-oriented languages check equivalence of objects using an equals (or similarly named) method. C# has the IEquatable<T> interface to designate that the class has a standard/canonical/default equivalence relation defined on it. In Java, one overrides the equals method every class inherits from Object.
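The negative-zero case is easy to reproduce; a short Haskell sketch, assuming IEEE-754 Doubles:

-- -0.0 and +0.0 are equivalent under (==) yet distinguishable.
main :: IO ()
main = do
  print ((-0.0) == (0.0 :: Double))        -- True
  print (1 / (-0.0) :: Double)             -- -Infinity
  print (1 / (0.0 :: Double))              -- Infinity
  print (isNegativeZero (-0.0 :: Double))  -- True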
As you can see, the notions become increasingly vague. Checking for identity is something most languages can express. Identity and bitwise equality usually cannot be hooked by the programmer as the notions are independent from interpretations. There was a C++20 proposal, which ended up being rejected, that would have introduced the last two notions as strong† and weak equality†. († This site looks like CppReference, but is not; it is not up-to-date.) The original paper is here.
There are languages without mutation, primarily functional languages like Haskell. The difference between equality and equivalence there is less of an issue and tilts to the mathematical use of those words. (In math, generally speaking, (recursively defined) sequences are used instead of re-assignments.)
Everything C has, is also available to C++ and any language that can use C functionality. Everything said about C# is true for Visual Basic .NET and probably all languages built on the .NET framework. Analogously, Java represents the JRE languages that also include Kotlin and Scala.
If you just want stupid definitions without wisdom: An equivalence relation is a reflexive, symmetric, and transitive binary relation on a set. Equality then is the intersection of all those equivalence relations.
