I've been banging my head against this question for some time and I can't figure it out. I've read the Wikipedia article "Free variables and bound variables" and several books, but I can't get the answer right.
Consider the following code:
local A B C=1 D=2 in
   A = 1
   proc {Add E F G}
      E = A + D + F
   end
end
Which of these identifiers (A, B, C, D, E, F, G) are free identifiers?
The concept of a free identifier always comes with a context. If you consider only the statement E = A + D + F, all four identifiers are free. But if you consider the whole procedure definition, E and F are now bound because they are formal parameters, so the free identifiers are A and D. Finally, if you consider the entire code you give, there is no free identifier, since all identifiers are declared.
Reference : Concepts, Techniques and Models of Computer Programming by Peter Van Roy and Seif Haridi.
The end of page 57 and the page 58 are interesting for this matter.
The first three chapters are available on the edX platform if you enroll in the course Paradigms of Computer Programming.
The free identifiers of any instruction are those identifier occurrences inside the instruction that correspond to declarations outside the instruction.
So it seems that A and D is the answer.
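For comparison, here is a rough Haskell analogue of the book's example (my own sketch, not from CTM; the names are made up), where the same context-dependent reasoning applies:

-- a and d are declared outside the lambda, so inside the body
-- (\f -> a + d + f) they are free; f is bound as the formal parameter.
-- Taking the whole expression as the context, nothing is free.
example :: Int -> Int
example = let a = 1
              d = 2
          in \f -> a + d + f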
In the functional composition g compose f, what terms are used to refer to, and differentiate, the ordering of the function arguments f and g passed to the composition operator compose? For example, given the following compositions
val reverse = (s: String) => s.reverse
val dropThree = (s: String) => s.drop(3)
(reverse compose dropThree)("Make it so!") // ==> !os ti e: java.lang.String
(dropThree compose reverse)("Make it so!") // ==> ti ekaM: java.lang.String
what terminology makes explicit that reverse is applied last in
reverse compose dropThree
whilst it is applied first in
dropThree compose reverse
Over at Math SE they seem to think such precise terminology has not yet emerged:
...I'd aim for an analogy with division or subtraction: you might, for
instance, call the outer function the composer and the inner the
composand. Words like this are not (as far as I know) in common use,
perhaps because the need for them hasn't arisen as often as those for
the elementary arithmetic operations.
However, in the software engineering world, composition, chaining, pipelining, etc. seem ubiquitous and are the bread and butter of functional programming, so there ought to exist precise terminology characterising the crucial ordering property of the operands involved in composition.
Note that the terms the question is after refer specifically to particular arguments of composition, not the whole expression, akin to divisor and dividend, which precisely describe which is which in division.
I don't think there's any formal terminology, but I would express a ∘ b (λx.a(b(x))) as:
composition of a and b
a composed with b
a composed onto b
a chained onto b
a after b
b before a
x pipelined through b and a
As for the designations of the arguments to the compose function, I guess you're left with the Math.SE answer.
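For what it's worth, Haskell makes the two readings pronounceable with operators (a small sketch of my own, mirroring the question's functions): (.) from the Prelude reads "after", while (>>>) from Control.Category reads "then".

import Control.Category ((>>>))

reverse' :: String -> String
reverse' = reverse

dropThree :: String -> String
dropThree = drop 3

-- (.) reads "after": reverse' is the outer function, applied last
afterStyle :: String -> String
afterStyle = reverse' . dropThree      -- "Make it so!" ==> "!os ti e"

-- (>>>) reads "then": reverse' is applied first, then dropThree
thenStyle :: String -> String
thenStyle = reverse' >>> dropThree     -- "Make it so!" ==> " ti ekaM"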
I'm working on a problem from Languages and Machines: An Introduction to the Theory of Computer Science (3rd edition), Chapter 2, Example 6.
I need help finding the answer to:
Give a recursive definition of the set of strings over {a, b} that contain exactly one b and an even number of a's before the first b.
When looking for a recursive definition, start by identifying the base cases and then look for the recursive steps, like you're doing induction. What are the smallest strings in this language? Well, any string must have a b. Is b alone a string in the language? Why yes it is, since there are zero a's before it and zero is an even number.
Rule 1: b is in L.
Now, given any string in the language, how can we get more strings? Well, we can apparently add any number of a's to the end of the string and get another string in the language. In fact, we can get all such strings from b if we simply allow adding one more a to the end of a string in the language. From x in L, we therefore recover xa, xaa, ..., xa^n, ... = xa*.
Rule 2: if x is in L then xa is in L.
Finally, what can we do to the beginning of strings in our language? The number of a's must be even. So far, rules 1 and 2 only allow us to construct strings that have zero a's before the b. We should be able to get two, four, six, and so on, all the even numbers of a's. A rule that lets us add two a's to the front of any string in our language will let us add ever more a's to the beginning while maintaining the evenness property we require. Starting with x in L, we recover aax, aaaax, ..., (aa)^n x, ... = (aa)*x.
Rule 3: if x is in L, then aax is in L.
Optionally, you may add the sometimes implicitly understood rule that only those things allowed by the aforementioned rules are in the language. Otherwise, technically anything is allowed, since we haven't explicitly disallowed anything yet.
Rule 4: Nothing is in L unless by virtue of some combination of the rules 1, 2 and/or 3 above.
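As a sanity check of rules 1-3, here is a quick membership test in Haskell (my own sketch; inL is a made-up name). The language is exactly (aa)*ba*:

-- A string is in L iff the prefix before the first 'b' is an even number
-- of 'a's and everything after that 'b' is 'a's (so there is exactly one 'b').
inL :: String -> Bool
inL s = case break (== 'b') s of
  (pre, 'b' : post) -> all (== 'a') pre && even (length pre) && all (== 'a') post
  _                 -> False          -- no 'b' at all

For example, inL "b", inL "aab" and inL "baa" are True, while inL "ab" and inL "abab" are False.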
I am working on some security stuff and am trying to implement a basic form of RSA encryption. I am working with Maple to compute some values, but I am struggling to compute this:
These are the values I have: e, p, q
I need to compute which value for 'd' will work in the following equation:
d*e ≡ 1 mod (p-1)*(q-1)
Notation note: If a - b is a multiple of the number c, we write "a ≡ b mod c".
I was told I could use some sort of Power(a,b) mod c functionality in Maple, but I am not sure how to do it. Can anyone shed light on how I can calculate the value of 'd' in Maple? In my case, e = 65537, and both p and q are really large prime numbers (100+ digits each).
It's as simple as d:= 1/e mod (p-1)*(q-1);
Alternatively, you can use the gcdex function: compute s and t such that s*e + t*(p-1)*(q-1) = 1, and use d = s.
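If you want to see what gcdex is doing under the hood, here is the same extended-Euclid idea sketched in Haskell (my own illustration, not Maple code; egcd and modInverse are made-up names):

-- Extended Euclid: egcd a b returns (g, s, t) with s*a + t*b == g == gcd a b.
egcd :: Integer -> Integer -> (Integer, Integer, Integer)
egcd a 0 = (a, 1, 0)
egcd a b = let (g, s, t) = egcd b (a `mod` b)
           in (g, t, s - (a `div` b) * t)

-- d is the inverse of e modulo m = (p-1)*(q-1), provided gcd e m == 1.
modInverse :: Integer -> Integer -> Maybe Integer
modInverse e m = case egcd e m of
  (1, s, _) -> Just (s `mod` m)
  _         -> Nothing

Then d = modInverse e ((p-1)*(q-1)).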
I feel bad not pointing this out: If this has anything at all to do with actual security (as opposed to learning about the theory), do not write your own code without spending a lot of time reading about attacks on implementations (as opposed to the math). RSA is very simple (and beautiful) mathematically, but surprisingly tricky to implement securely.
Note that there is a special StackExchange site for security, which you may be interested in.
Can anyone give me an example of how to improve code reusability using algebraic structures like groups, monoids and rings? (Or: how can I make use of these kinds of structures in programming, so that I know I didn't learn all that theory in high school for nothing?)
I've heard this is possible, but I can't figure out a way to apply them, or hardcore mathematics in general, to programming.
It is not so much the mathematical content that helps as the mathematical thinking. Abstraction is the key in programming: transforming real-life concepts into numbers and relations is what we do every day. Algebra is the mother of it all; it is the set of rules that defines correctness, the highest level of abstraction, so understanding algebra means you can think more clearly, faster, and more efficiently. From set theory to category theory, domain theory, etc., everything comes from practical challenges and the need for abstraction and generalisation.
In common practice you will not need to actually know these, although if you are thinking of developing things like AI agents, programming languages, or fundamental concepts and tools, then they are a must.
In functional programming, especially Haskell, it's common to structure programs that transform state as monads. Doing so means you can reuse generic algorithms on monads in very different programs.
The C++ standard template library features the concept of a monoid. The idea is again that generic algorithms may require an operation to satisfy the monoid axioms for their correctness.
E.g., if we can prove the type T we're operating on (numbers, strings, whatever) is closed under the operation, we know we won't have to check for certain errors; we always get a valid T back. If we can prove that the operation is associative (x * (y * z) = (x * y) * z), then we can reuse the fork-join architecture, a simple but effective approach to parallel programming implemented in various libraries.
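As a tiny Haskell illustration of why associativity matters (my own sketch; total is a made-up name): because (<>) is associative, we can split a fold anywhere, compute the halves independently (e.g. in parallel), and combine the partial results.

import Data.Monoid (Sum(..))

-- Splitting the list and combining partial sums gives the same answer
-- as a single left-to-right fold; that freedom is what fork-join exploits.
total :: [Int] -> Int
total xs = getSum (foldMap Sum left <> foldMap Sum right)
  where (left, right) = splitAt (length xs `div` 2) xs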
Computer science seems to get a lot of mileage out of category theory these days. You get monads, monoids, functors -- an entire bestiary of mathematical entities that are being used to improve code reusability, harnessing the abstraction of abstract mathematics.
Lists are free monoids with one generator, binary trees are groups. You have either the finite or infinite variant.
Starting points:
http://en.wikipedia.org/wiki/Algebraic_data_type
http://en.wikipedia.org/wiki/Initial_algebra
http://en.wikipedia.org/wiki/F-algebra
You may want to learn category theory, and the way category theory approaches algebraic structures: it is exactly the way functional programming languages approach data structures, at least shapewise.
Example: the type Tree A is
Tree A = () | Tree A | Tree A * Tree A
which reads as the existence of an isomorphism (*) (setting G = Tree A)
1 + G + G x G -> G
which has the same shape as a group structure
phi : 1 + G + G x G -> G
()     ∈ 1     -> e
x      ∈ G     -> x^(-1)
(x, y) ∈ G x G -> x * y
Indeed, binary trees can represent expressions, and they form an algebraic structure. An element of G reads as either the identity, an inverse of an element or the product of two elements. A binary tree is either a leaf, a single tree or a pair of trees. Note the similarity in shape.
(*) as well as a universal property, but there are two of them (finite trees or infinite lazy trees) so I won't delve into details.
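In Haskell the same shape can be written down directly (a sketch of the analogy; the constructor names are made up):

-- One constructor per summand of 1 + G + G x G -> G:
data Tree a = Leaf                   -- ()     ∈ 1,     the identity e
            | Single (Tree a)        -- x      ∈ G,     read as x^(-1)
            | Pair (Tree a) (Tree a) -- (x, y) ∈ G x G, read as x * y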
As I had no idea this stuff existed in the computer science world, please disregard this answer ;)
I don't think the two fields (no pun intended) have any overlap. Rings/fields/groups deal with mathematical objects. Consider a part of the definition of a field:
For every a in F, there exists an element −a in F, such that a + (−a) = 0. Similarly, for any a in F other than 0, there exists an element a^−1 in F, such that a · a^−1 = 1. (The elements a + (−b) and a · b^−1 are also denoted a − b and a/b, respectively.) In other words, subtraction and division operations exist.
What the heck does this mean in terms of programming? I surely can't have an additive inverse of a list object in Python (well, I could destroy the object, but that's more like a multiplicative inverse; I guess you could get somewhere trying to define a Python-ring, but it just won't work out in the end). Don't even think about dividing lists...
As for code reusability, I have absolutely no idea how this can even be applied, so this application is irrelevant.
This is my interpretation, but being a mathematics major probably makes me blind to other terminology from different fields (you know which one I'm talking about).
Monoids are ubiquitous in programming. In some programming languages, e.g. Haskell, we can make monoids explicit: http://blog.sigfpe.com/2009/01/haskell-monoids-and-their-uses.html
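For instance, making a monoid explicit in Haskell is just a matter of declaring the instances (a minimal sketch; MaxInt is a made-up name):

newtype MaxInt = MaxInt Int

instance Semigroup MaxInt where
  MaxInt a <> MaxInt b = MaxInt (max a b)   -- associative combine

instance Monoid MaxInt where
  mempty = MaxInt minBound                  -- identity for max on Int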
Basically, I know how to create graph data structures and use Dijkstra's algorithm in programming languages where side effects are allowed. Typically, graph algorithms use a structure to mark certain nodes as 'visited', but this has side effects, which I'm trying to avoid.
I can think of one way to implement this in a functional language, but it basically requires passing around large amounts of state to different functions, and I'm wondering if there is a more space-efficient solution.
You might check out how Martin Erwig's functional graph library for Haskell does things. For instance, its shortest-path functions are all pure, and you can look at the source code to see how they're implemented.
Another option, as fmark mentioned, is to use an abstraction which allows you to implement pure functions in terms of state. He mentions the State monad (which is available in both lazy and strict varieties). If you're working with the GHC Haskell compiler/interpreter (or, I think, any Haskell implementation that supports rank-2 types), there is also the ST monad, which allows you to write pure functions that deal with mutable variables internally.
If you were using Haskell, the only functional language with which I am familiar, I would recommend the State monad. The State monad is an abstraction for a function that takes a state and returns an intermediate value along with some new state value. This is considered idiomatic Haskell for situations where maintaining a large state is necessary.
It is a much nicer alternative to the naive "return the new state as part of the function result and pass it in as a parameter" idiom that is emphasized in beginner functional programming tutorials. I imagine most functional programming languages have a similar construct.
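As a small sketch of that idiom (assuming the mtl State monad and Data.Set; visit is a made-up name), marking a node visited looks like this:

import Control.Monad.State
import qualified Data.Set as Set

-- Returns True if the node was not yet visited, and marks it visited.
-- e.g. runState (visit 3) Set.empty ==> (True, fromList [3])
visit :: Int -> State (Set.Set Int) Bool
visit n = do
  seen <- get
  if n `Set.member` seen
    then pure False
    else do
      put (Set.insert n seen)
      pure True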
I just keep the visited set as a set and pass it as a parameter. There are efficient log-time implementations of sets of any ordered type and extra-efficient sets of integers.
To represent a graph, I use adjacency lists, or a finite map that maps each node to a list of its successors; it depends on what I want to do.
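Here is a minimal sketch of that approach (my own, with made-up names), assuming the graph is a Data.Map from each node to its successor list:

import qualified Data.Map as Map
import qualified Data.Set as Set

type Graph = Map.Map Int [Int]

-- Depth-first search with the visited set threaded through as a parameter.
dfs :: Graph -> Int -> [Int]
dfs g start = fst (go start Set.empty)
  where
    go n visited
      | n `Set.member` visited = ([], visited)
      | otherwise =
          foldl step ([n], Set.insert n visited)
                (Map.findWithDefault [] n g)
    step (acc, vis) m =
      let (xs, vis') = go m vis
      in (acc ++ xs, vis')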
Rather than Abelson and Sussman, I recommend Chris Okasaki's Purely Functional Data Structures. I've linked to Chris's dissertation, but if you have the money, he expanded it into an excellent book.
Just for grins, here's a slightly scary reverse postorder depth-first search done in continuation-passing style in Haskell. This is straight out of the Hoopl optimizer library:
postorder_dfs_from_except :: forall block e . (NonLocal block, LabelsPtr e)
                          => LabelMap (block C C) -> e -> LabelSet -> [block C C]
postorder_dfs_from_except blocks b visited =
  vchildren (get_children b) (\acc _visited -> acc) [] visited
  where
    vnode :: block C C -> ([block C C] -> LabelSet -> a)
                       -> ([block C C] -> LabelSet -> a)
    vnode block cont acc visited =
        if setMember id visited then
          cont acc visited
        else
          let cont' acc visited = cont (block:acc) visited in
          vchildren (get_children block) cont' acc (setInsert id visited)
      where id = entryLabel block
    vchildren bs cont acc visited = next bs acc visited
      where next children acc visited =
              case children of
                []     -> cont acc visited
                (b:bs) -> vnode b (next bs) acc visited
    get_children block = foldr add_id [] $ targetLabels block
    add_id id rst = case lookupFact id blocks of
                      Just b  -> b : rst
                      Nothing -> rst
Here is a Swift example. You might find this a bit more readable. The variables are actually descriptively named, unlike the super cryptic Haskell examples.
https://github.com/gistya/Functional-Swift-Graph
Most functional languages support inner functions, so you can create your graph representation in the outermost layer and simply reference it from the inner functions.
This book covers it extensively: http://www.amazon.com/gp/product/0262510871/ref=pd_lpo_k2_dp_sr_1?ie=UTF8&cloe_id=aa7c71b1-f0f7-4fca-8003-525e801b8d46&attrMsgId=LPWidget-A1&pf_rd_p=486539851&pf_rd_s=lpo-top-stripe-1&pf_rd_t=201&pf_rd_i=0262011530&pf_rd_m=ATVPDKIKX0DER&pf_rd_r=114DJE8K5BG75B86E1QS
I would love to hear about some really clever technique, but I think there are two fundamental approaches:
Modify some global state object (i.e., side effects).
Pass the graph as an argument to your functions, with the return value being the modified graph; I assume this is your approach of "passing around large amounts of state".
That is what's done in functional programming. If the compiler/interpreter is any good, it will help manage memory for you. In particular, you'll want to make sure that you use tail recursion, if you happen to recurse in any of your functions.
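In Haskell-like languages, "tail recursion" usually means an accumulator-passing loop; here is a minimal sketch (my own, with a made-up name; the strictness annotation avoids building up thunks):

{-# LANGUAGE BangPatterns #-}

-- Tail-recursive sum: the recursive call is the last thing go does,
-- so the compiler can turn it into a loop over the accumulator.
sumAcc :: [Int] -> Int
sumAcc = go 0
  where
    go !acc []       = acc
    go !acc (x : xs) = go (acc + x) xs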