I tried to read the article on the history monoid but couldn't wrap my head around it. Could somebody please explain it in simpler terms?
Thank you
Reference: http://en.wikipedia.org/wiki/History_monoid
The history monoid is the set of possible sequences of primitive actions across the threads, taking into account synchronization primitives, which occur in more than one thread simultaneously.
Actually, it is not just a set but a monoid, which means that you can concatenate two sequences to get a new sequence in the monoid, and that there is a neutral element, the empty sequence.
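A minimal sketch of this view in Haskell (my own simplification: it represents a history as a plain event sequence, and ignores the quotient that identifies interleavings differing only in the order of independent actions):

    type ThreadId = Int
    type Action   = String

    -- A history: a sequence of events; a synchronization event is
    -- tagged with every thread that takes part in it.
    newtype History = History [([ThreadId], Action)]
      deriving Show

    -- Concatenation of sequences...
    instance Semigroup History where
      History xs <> History ys = History (xs ++ ys)

    -- ...with the empty sequence as the neutral element.
    instance Monoid History where
      mempty = History []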
From Bratko's book, Prolog Programming for Artificial Intelligence (4th Edition)
We have the following code, which doesn't work:
    anc4(X,Z) :-
        anc4(X,Y),
        parent(Y,Z).
    anc4(X,Z) :-
        parent(X,Z).
In the book, on page 55, figure 2.15 shows that parent(Y,Z) keeps being called until the stack runs out of memory.
What I don't understand is this: Prolog makes the recursive call to anc4(X,Y) first, not to parent(Y,Z). Why doesn't Prolog go over and over the first line, anc4(X,Y), instead of moving on to the second line?
Can you please elaborate on why the line parent(Y,Z) keeps being called?
Thank you.
The origin of your 'problem' (i.e. goal order) is deeply rooted in the basics of the language.
Prolog implements SLD resolution by means of the simple and efficient strategy of chronological backtracking, and left-recursive clauses, like the first clause of anc4/2, cause infinite recursion.
Indeed, the comma operator (,)/2 stands for
    evaluate the right expression only if the left expression holds
So, order of goals in a clause is actually an essential part of the program.
For your concrete case,
    ..., parent(Y,Z).
cannot be called if
    anc4(X,Y)
doesn't hold.
The counterpart to goals order is clauses order.
That is, the whole program has a different semantics after the clauses are exchanged:
    anc4(X,Z) :-
        parent(X,Z).
    anc4(X,Z) :-
        anc4(X,Y),
        parent(Y,Z).
To better understand the problem, I think it's worth trying this definition as well.
Prolog cannot handle left recursion by default; it requires a tabling mechanism. Only some Prolog systems support tabling, and you usually need to declare explicitly which predicates are tabled.
If you're using e.g. XSB, YAP, or SWI-Prolog, try adding the following directive on top of your source file containing the definition for the anc4/2 predicate:
    :- table(anc4/2).
and retry your queries. The tabling mechanism detects when a query calls a variant (*) of itself, suspends execution of that branch, and tries an alternative branch (provided in your case by the second clause). When an answer to the query is found, execution of the suspended branch is resumed with that answer.
(*) Variant here means that two terms are equal upon variable renaming.
This is a question from last year's exam in my Data Structures course.
A queue (the data structure) consisting of n elements is implemented using pointers. The question asks for the maximum and minimum number of pointers used in that data structure.
I don't really know where to start with this. I know the pointer-based implementation of a queue; from it, I guess we only need two pointers, one each for the front and the rear, which may be the minimum.
On the other hand, the elements of the queue are cells that contain pointers to the next element, so shouldn't there be n+1 pointers?
I would be grateful for a nice explanation of this, or at least a hint.
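For reference, here is a minimal Haskell sketch of the kind of pointer-based queue the question describes (my own layout assumption: a null-terminated singly linked list with front and rear pointers), with one possible way of counting the references in the comments:

    import Data.IORef

    -- One cell: a value plus a "next" pointer (Nothing plays the role of null).
    data Node a = Node a (IORef (Maybe (Node a)))

    -- The queue itself keeps two pointers, front and rear.
    data Queue a = Queue
      { front :: IORef (Maybe (Node a))
      , rear  :: IORef (Maybe (Node a))
      }

    newQueue :: IO (Queue a)
    newQueue = Queue <$> newIORef Nothing <*> newIORef Nothing

    -- With n elements, this layout holds n "next" pointers (one per cell)
    -- plus the front and rear pointers: n + 2 in total. A circular list
    -- keeping only a rear pointer would get by with n + 1, which is one
    -- way to arrive at a minimum/maximum style of answer.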
For a while I was thinking that you just need a map into a monoid, and then reduce would do the reduction according to the monoid's multiplication.
First, this is not exactly how monoids work, and second, this is not exactly how map/reduce works in practice.
Namely, take the ubiquitous "count" example. If there's nothing to count, any map/reduce engine will return an empty dataset, not a neutral element. Bummer.
Besides, in a monoid, an operation is defined for two elements. We can easily extend it to finite sequences, or, due to associativity, to finite ordered sets. But there's no way to extend it to arbitrary "collections" unless we actually have a σ-algebra.
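For concreteness, here is a minimal Haskell sketch of the "map into a monoid, then reduce by its multiplication" idea described above (note that, unlike the engines mentioned, it does return the neutral element on empty input):

    import Data.Monoid (Sum(..))

    -- Map each element into a monoid, then reduce with the monoid
    -- operation. (This is foldMap, specialized to lists.)
    mapReduce :: Monoid m => (a -> m) -> [a] -> m
    mapReduce f = mconcat . map f

    -- The ubiquitous "count": every element maps to Sum 1.
    count :: [a] -> Int
    count = getSum . mapReduce (const (Sum 1))

    -- count []       == 0   (mempty, the neutral element)
    -- count "monoid" == 6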
So, what's the theory? I tried to figure it out, but I could not; and I tried to go Google it but found nothing.
I think the right way to think about map-reduce is not as a computational paradigm in its own right, but rather as a control-flow construct, similar to a while loop. You can view while as a program constructor with two arguments: a predicate function and an arbitrary program. Similarly, the map-reduce construct has two arguments, named map and reduce, each of which is a function. So, analogously to while, the useful questions to ask are about proving correctness of constructed programs relative to given preconditions and postconditions. And as usual, those questions involve (a) termination and run-time performance and (b) maintenance of invariants.
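A small Haskell sketch of that analogy (the names and signatures here are my own, not a standard API): both constructs take two function arguments and yield a program.

    -- 'while' as a program constructor: a predicate and a body yield a
    -- program (here, a state transformer).
    while :: (s -> Bool) -> (s -> s) -> (s -> s)
    while p body s = if p s then while p body (body s) else s

    -- 'map-reduce' as a program constructor: a map function and a binary
    -- (associative) reduce function yield a program over lists. A seed is
    -- needed for the empty case, matching the observation in the question.
    mapReduceWith :: (a -> b) -> (b -> b -> b) -> b -> ([a] -> b)
    mapReduceWith m r z = foldr r z . map m

    -- while (< 10) (+ 1) 0                   == 10
    -- mapReduceWith (const 1) (+) 0 "abcdef" == 6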
Do you know if there are pointers in Haskell?
If yes: how do you use them? Are there any problems with them? And why aren't they popular?
If no: is there any reason for it?
Yes, there are. Take a look at Foreign.Ptr or Data.IORef.
I suspect this wasn't what you were asking for, though. As Haskell is for the most part without state, pointers don't fit into the language design: having a pointer to memory outside the function would mean that the function is no longer pure, and allowing pointers only to values within the current function would be useless.
Haskell does provide pointers, via the foreign function interface extension. Look at, for example, Foreign.Storable.
Pointers are used for interoperating with C code, not for everyday Haskell programming.
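For example, a minimal sketch using the standard Foreign modules:

    import Foreign.Marshal.Alloc (alloca)
    import Foreign.Storable (peek, poke)

    main :: IO ()
    main = alloca $ \p -> do   -- allocate a temporary buffer, get a Ptr
      poke p (42 :: Int)       -- write through the raw pointer
      v <- peek p              -- read it back
      print v                  -- prints 42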
If you're looking for references -- pointers to objects you wish to mutate -- there are STRef and IORef, which serve many of the same uses as pointers. However, you should rarely -- if ever -- need Refs.
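A small sketch of IORef usage, for the rare case you do need one:

    import Data.IORef

    main :: IO ()
    main = do
      ref <- newIORef (0 :: Int)  -- allocate a mutable cell
      writeIORef ref 41           -- mutate through the reference
      modifyIORef ref (+ 1)       -- read-modify-write
      readIORef ref >>= print     -- prints 42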
If you simply wish to avoid copying large values, as sepp2k supposes, then you need do nothing: in most implementations, all non-trivial values are allocated separately on the heap and refer to one another by machine-level addresses (i.e. pointers). But again, you need do nothing about any of this; it is taken care of for you.
To answer your question about how values are passed: they are passed in whatever way the implementation sees fit. Since you can't mutate the values anyway, it doesn't affect the meaning of the code (as long as strictness is respected); usually this works out to pass-by-need, unless you're passing in e.g. Int values that the compiler can see have already been evaluated.
Pass-by-need is like pass-by-reference, except that any given reference could refer either to an actual evaluated value (which cannot be changed), or to a "thunk" for a not-yet-evaluated value. Wikipedia has more.
As an extension question, my lecturer for my maths-in-computer-science module asked us to find examples of when a surjective function is vital to the operation of a system; he said he can't think of any!
I've been doing some googling and have only found a single outdated paper about non-surjective rounding functions creating flaws in some cryptographic systems.
Master edit:
[btw, thank you for the accepted response.]
In reviewing my response, and those of others in this post, I realized two things.
The first is the fact that, looking at things at a higher level of abstraction, most (all?) of the [counter-]examples provided are a form of "discretization" function. In other words, they correspond to the ubiquitous requirement in computer systems of mapping [possibly infinitely] numerous entities/values to a set (possibly "infinite" too, though most often a finite one) of discrete entities/values. While not all such mappings imply or require a non-bijective surjection, many do, hence the several examples found.
The other observation is that the most compelling examples seem to be tied to stochastic (random) processes, or to the underlying primitives which support them.
Both of these things are quite telling, I think, for they mirror, if only loosely, the way the real world's complexity (read "randomness", at many levels) is exploited in various systems, human and animal alike, to produce the simplified/stable/discrete maps that represent elements of this complex reality: another case where mathematics and its practically minded friend, computer science, team up to describe or to mimic fundamental realities (or... are these realities? hmm... we're getting too philosophical...).
It could be a matter of understanding exactly the frame of the question:
do bijective functions count? (they are indeed a special case of surjection)
Edit: No, bijective functions are not considered.
does it have to be a "function" in the sense of a procedural calculation, as opposed to, say, a "relation" as in databases?
Edit: yes, a procedural function of sorts... "take in a value and return another value" (IMHO this distinction is very tenuous, as any "map" is a function regardless of the inner workings, but let's entertain this "numeric-calculation-like" restriction in the spirit of the question)
define "vital"...
With all these caveats in mind, the following may apply:
Elementary mathematical functions such as ABS() or even ROUND() and FLOOR() (absolute value, and rounding of a decimal/float value to the nearest integer, respectively), etc.; see the sketch after this list.
In the case of ABS(), for example, a program which draws shapes on the screen, relying on various properties of, say, symmetry, would count on getting two, and exactly two, values mapping to a given value, and on every value in a given integral range (say from 0 to 10) being an ABS() value, lest the drawings start to look funny ;-)
the Soundex function (and its many derivations)
Modulo operations, even in such trivial uses as showing the status of a process every x items processed.
Classification processes: it is important both that there be a significant reduction factor (thousands of instances mapped to a handful of categories), and [in some cases] vital that all instances yield one and only one category (e.g. in real-time decision systems).
Various "simple" mathematical functions used in pseudo-random number generators.
It is vital that these be surjective, so that all values within the range are reachable; indeed, an expectation of a specific, often uniform, distribution is implied. (Note: this could be a bit of a repeat of the "modulo" example above, although it doesn't have to use modulo arithmetic proper; other math functions can do.)
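A small Haskell sketch of the ABS() and modulo examples above (the surjectivity claims are in the comments; the function names are mine):

    -- abs maps the integers onto the non-negative integers: every k >= 0
    -- is hit by exactly the two preimages k and -k (just one when k == 0),
    -- which is the symmetry the drawing example relies on.
    preimagesOfAbs :: Int -> [Int]
    preimagesOfAbs k = [x | x <- [-k .. k], abs x == k]

    -- (`mod` k) maps the non-negative integers onto {0, ..., k-1}: every
    -- residue occurs, which the progress-display and PRNG examples count on.
    residues :: Int -> [Int]
    residues k = map (`mod` k) [0 .. k - 1]   -- == [0 .. k-1], all residues hit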
The following is a bad example, now that Martin has clarified that [math-operation-like functions that] "take in a value and return another value" are what define "function", disqualifying database/table-driven "maps" and such, and that bijections are not considered either.
One-to-one relations (or one-to-many relations, for that matter): it can be so important to maintain these that we require triggers etc. to preserve referential integrity.
A very simple scheduler implemented by the function random(0, number of processes - 1) expects this function to be surjective; otherwise some processes will never run.
In practice the scheduler has some sort of internal state that it modifies. If you want to see it as a function in the mathematical sense, it takes a state and returns a new state plus a process number to run, and in this context it is no longer important that it be surjective, because not all possible states have to be reachable. Not a very good example, I'm afraid, but the only one I can think of.
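A tiny Haskell sketch of the scheduler example above (pickProcess is a hypothetical name of mine; randomRIO comes from the random package and draws from the whole closed range):

    import System.Random (randomRIO)

    -- Pick the next process to run. The draw ranges over all of
    -- [0 .. n-1]; if some index could never come up (a non-surjective
    -- pick), that process would never run.
    pickProcess :: Int -> IO Int
    pickProcess n = randomRIO (0, n - 1)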
A hashing function should ideally be surjective.
But in general I think the question is too vague to be answered. What is a system? What is a function used inside a system?
Edit:
I think the question is not very meaningful. After all, there are many cases where you need to be able to produce every desired result. Just think of the identity function, and imagine where you could argue that it is used:
using a reference to a variable in programming
using a text editor (or even a hex editor) to produce a file
It would be very bad if you could not create every bit combination via XOR or NOT when doing bit manipulation.
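For instance, a short Haskell illustration of the XOR point (the function name is mine): for any fixed mask, XOR is its own inverse, so every bit pattern is reachable.

    import Data.Bits (xor)
    import Data.Word (Word8)

    -- For a fixed mask k, (`xor` k) is a bijection on Word8 (it is its
    -- own inverse), hence surjective: any target byte b is produced
    -- from the input b `xor` k.
    reaches :: Word8 -> Word8 -> Bool
    reaches k b = (b `xor` k) `xor` k == b   -- always True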