Suppose I have two finite posets (e.g. constructed with sage.combinat.posets.posets.FinitePoset).
I want to calculate the binary relation which is the composition of the order relations of these posets.
How can I do this in Sage?
(I am a Sage novice.)
Not yet, apparently. See Trac 24542 for a general future implementation of binary relations (which is likely what you'd need here, since the composition of two poset relations will in general not itself be a partial order).
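In the meantime, a minimal workaround is to pull each order relation out as a set of pairs and compose them by hand in plain Python. This is only a sketch under assumptions: it presumes both posets live on the same underlying set and that P.relations() returns the comparable pairs (x, y) with x <= y (which is how I read the FinitePoset docs); the helper name compose_relations is mine.
# Hedged sketch: compose the order relations of two Sage posets P and Q
# (assumed to be on the same element set) into a plain set of pairs.
def compose_relations(P, Q):
    R = set(tuple(r) for r in P.relations())  # pairs (x, y) with x <= y in P
    S = set(tuple(r) for r in Q.relations())  # pairs (y, z) with y <= z in Q
    # (x, z) is in the composite relation iff some y has (x, y) in R and (y, z) in S.
    return {(x, z) for (x, y1) in R for (y2, z) in S if y1 == y2}
The result is just a Python set of pairs; turning it back into a first-class Sage object is exactly what the ticket above would provide.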
In traditional simplex algorithm notation, we have x at the current basis selection B as follows:
x_B = A_B^{-1} b - A_B^{-1} A_N x_N. How can I compute the A_B^{-1} A_N term inside a separator in SCIP, or at least iterate over its columns?
I see three helpful methods: getLPColsData, getLPRowsData, getLPBasisInd. I'm just not sure exactly what data those methods represent, particularly the last one, with its negative row indexes. How do I use those to get the value I want?
Do those methods return the same data no matter what LP algorithm is used? Or do I need to account for dual vs primal? How does the use of the "revised" algorithm play into my calculation?
Update: I discovered getLPBInvARow and getLPBInvRow. Those seem much closer to what I'm after, but I don't yet understand their results; they seem to include more or fewer dimensions than expected. I'm still trying to understand how to use them to get the rays away from the current corner.
You are correct that getLPBInvRow and getLPBInvARow are the methods you want. getLPBInvARow directly returns you a row of the simplex tableau, but it is not more efficient than using getLPBInvRow and doing the multiplication yourself, since the LP solver also needs to compute the actual tableau first.
I suggest you look into either sepa_gomory.c or sepa_gmi.c for examples of how to use these methods. How do they include fewer dimensions than expected? They both return sparse vectors.
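To make the bookkeeping concrete, here is a rough, untested sketch of walking the rows of A_B^{-1} A inside a PySCIPOpt separator callback. Treat the details as assumptions: that the Python bindings expose the methods named in the question (getLPColsData, getLPBasisInd, getLPBInvARow) with these signatures, and that the returned row can simply be indexed by LP column position (in the C API you would additionally get the sparsity pattern mentioned above). Verify everything against sepa_gomory.c / sepa_gmi.c; the class and variable names are mine.
from pyscipopt import Sepa, SCIP_RESULT

class TableauInspector(Sepa):
    def sepaexeclp(self):
        m = self.model
        cols = m.getLPColsData()      # structural columns of the current LP
        basisind = m.getLPBasisInd()  # entries >= 0 are LP column indices; negative entries
                                      # refer to row slacks (the negative row indices from the question)
        basic_cols = {b for b in basisind if b >= 0}

        for i, b in enumerate(basisind):
            binva_row = m.getLPBInvARow(i)  # row i of B^-1 A; indexing by LP column position assumed
            # Nonbasic entries of this row are the coefficients of A_B^-1 A_N
            # for the basic variable identified by b.
            nonbasic_part = [(j, binva_row[j]) for j in range(len(cols)) if j not in basic_cols]
            # ... build a cut / inspect the ray coefficients here ...
        return {"result": SCIP_RESULT.DIDNOTFIND}
The Gomory separator source is the authoritative example for how these calls are actually meant to be combined.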
I'm writing a bunch of recursive graph algorithms where graph nodes have parents, children, and a number of other properties. The algorithms can also create nodes dynamically, and make use of recursive functions.
What are the right data structures to use in this case? In C++ I would've implemented this via pointers (i.e. each node has a vector<Node*> parents, vector<Node*> children), but I'm not sure if Julia pointers are the right tool for that, or if there's something else ... ?
In Julia, the state of the art in this regard is the LightGraphs.jl library.
It uses adjacency lists for graph representation and assumes that the data for the nodes is kept outside the graph (for example, in Vectors indexed by node identifiers) rather than inside it.
This approach is generally the most efficient and the most convenient (you operate on Array indices rather than on references).
LightGraphs.jl provides implementations of several typical graph algorithms and is usually the way to go when doing computations on graphs.
However, LightGraphs.jl's approach might be less convenient in scenarios where you are continuously adding and destroying many nodes of the graph at the same time.
Now, regarding an equivalent of the C++ approach you have proposed, it can be accomplished as:
struct MyNode{T}
    data::T
    children::Vector{MyNode}
    parents::Vector{MyNode}
    MyNode(data::T, children=MyNode[], parents=MyNode[]) where {T} = new{T}(data, children, parents)
end
And this API can be used as:
node1 = MyNode(nothing)
push!(node1.parents, MyNode("hello2"))
Finally, since LightGraphs.jl is the Julia standard, it is usually worth providing some bridging implementation so that your API can be used with LightGraphs.jl functions.
For an illustration of how this can be done, have a look at the SimpleHypergraphs.jl library.
EDIT:
Normally, for efficiency reasons, you will want the data field to be homogeneous across the graph; in that case it is better to use:
struct MyNode{T}
    data::T
    children::Vector{MyNode{T}}
    parents::Vector{MyNode{T}}
    MyNode(data::T, children=MyNode{T}[], parents=MyNode{T}[]) where {T} = new{T}(data, children, parents)
end
In an Isabelle formalization, I’m representing relations by binary predicates. I would like to have operators that perform typical relation operations like composition and inversion using this representation.
The document “What’s in Main” only mentions such operators for the representation by sets of pairs. The Relation theory says at the beginning, “Relations – as sets of pairs, and binary predicates”. However, I couldn’t find much support for the binary predicate representation in this theory. All I found were several lemmas with a mysterious pred_set_conv attribute.
Is there extensive support for relations represented by binary predicates? In particular, are there operators for common relation operations defined? Where are these things documented?
The support for relations as sets of pairs is slightly better developed than for binary predicates, but quite a lot is available. Many relation operations, however, are instances of more general operations on functions and predicates or they are indeed obtained using pred_set_conv. They may therefore be quite hard to find. Use the find_theorems command or panel to find the lemmas. Here is a brief summary of the usual operations:
Composition: relcompp (infix OO)
Inverse: conversep (notation _\<inverse>\<inverse>)
(Reflexive) transitive closure: tranclp and rtranclp
Intersection: inf
Union: sup
Inclusion: op <= (I find the lemmas predicate2I and predicate2D particularly useful)
Graph of a function restricted to a domain: BNF_Def.Grp
Inverse image under two functions: BNF_Def.vimage2p
Well-foundedness and accessibility: wfP and accp
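For orientation, here is how the first two of these unfold on binary predicates (quoted from memory, so please double-check with find_theorems; the lemma names I have in mind are relcompp_apply and conversep_iff):
(P OO Q) x z \<longleftrightarrow> (\<exists>y. P x y \<and> Q y z)
P\<inverse>\<inverse> x y \<longleftrightarrow> P y x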
Say I have a set of strings:
x=c("a1","b2","c3","d4")
If I have a set of rules that must be met:
if "a1" and "b2" are together in group, then "c3" cannot be in that group.
if "d4" and "a1" are together in a group, then "b2" cannot be in that group.
I was wondering what sort of efficient algorithms are suitable for generating all combinations that meet those rules? What research, papers, or anything else talks about this type of constrained combination generation problem?
In the above problem, assume it's combn(x, 3).
I don't know anything about R, so I'll just address the theoretical aspect of this question.
First, the constraints are really boolean predicates of the form "a1 ∧ b2 → ¬c3" and so on. That means that all valid combinations can be represented by one binary decision diagram, which can be created by taking each of the constraints and ANDing them together. In theory you might make an exponentially large BDD that way (that usually doesn't happen, but it depends on the structure of the problem), but that would mean you can't really list all combinations anyway, so it's probably not too bad.
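Just to pin down that predicate reading of the rules before the decision diagrams, here is a small brute-force baseline in Python (not the BDD/ZDD approach, and all names are mine) that filters the size-3 combinations against the two constraints.
from itertools import combinations

x = ["a1", "b2", "c3", "d4"]

# Each rule "p and q together forbid r" as a predicate over a candidate group.
def ok(group):
    s = set(group)
    if {"a1", "b2"} <= s and "c3" in s:
        return False
    if {"d4", "a1"} <= s and "b2" in s:
        return False
    return True

valid = [g for g in combinations(x, 3) if ok(g)]
print(valid)  # [('a1', 'c3', 'd4'), ('b2', 'c3', 'd4')]
This only scales to small instances; the point of the BDD/ZDD machinery is to represent the same family of valid groups without enumerating and testing every combination.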
For example, the BDD generated for those two constraints would be as follows (I think - not tested - just to give an idea). [BDD diagram not reproduced here.]
But since this is really about a family of sets, a ZDD probably works even better. The difference, roughly, between a BDD and a ZDD is that a BDD compresses nodes that have equal sub-trees (in the total tree of all possibilities), while the ZDD compresses nodes where the solid edge (i.e. "set this variable to 1") goes to False. Both re-use equal sub-trees and thus form a DAG.
The ZDD of the example would be as follows (again not tested). [ZDD diagram not reproduced here.]
I find ZDDs a bit easier to manipulate in code, because any time a variable can be set, it will appear in the ZDD. In contrast, in a BDD, "skipped" nodes have to be detected, including "between the last node and the leaf", so for a BDD you have to keep track of your universe. For a ZDD, most operations are independent of the universe (except complement, which is rarely needed in the family-of-sets scenario). A downside is that you have to be aware of the universe when constructing the constraints, because they have to contain "don't care" paths for all the variables not mentioned in the constraint.
You can find more information about both BDDs and ZDDs in The Art of Computer Programming volume 4A, chapter 7.1.4, there is an old version available for free here.
These methods are particularly nice for representing large numbers of such combinations, and for manipulating them before generating all the possibilities. So this will also work when there are many items and many constraints (as long as the final count of combinations is not too large), (usually) without creating intermediate results of exponential size.
While I was reading about lambda calculus, I came across the term "lambda definability". Can someone please explain what that is, as I couldn't find any good resources on it?
Thanks
More generally, there is a line of research seeking to characterize "lambda definability" over a broad class of languages. "Lambda definability" itself is typically relative to a semantics of a language given in terms of sets. For a type T in our language, write |T| for its interpretation as a set. Now take an element of |T| -- call it e. We want to know if there is a term in our language -- call it x : T (x of type T) -- such that |x| is e. If there is such a term, then we say that e is lambda-definable.
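In symbols (just restating the paragraph above, with notation chosen by me):
e ∈ |T| is lambda-definable  ⟺  there is a closed term x : T with |x| = e.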
Now, in our perfect world, when we interpret a language into sets, we would like to say that the sets associated with each type contain exactly the lambda-definable elements of that type and nothing else (completeness). It would also be nice, perhaps, to say that we can provide an algorithm to determine whether a claimed element of a set has an associated lambda term (decidability).
Now, often we don't just model into sets, but into other funny mathematical constructions. And we don't model just from the lambda calculus, but from other related systems such as Plotkin's PCF or the like. But the property under study is typically still called "lambda-definability".
After decades of research there are still many open problems and questions in this regard -- while certain lower-order terms have been shown to have decidable lambda-definability (the classic results involve terms up to second-order), many terms do not yield so easily. This paper ("The Undecidability of lambda-Definability" by Ralph Loader) gives an important such undecidability result and characterizes some consequences: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.36.6860
See the Church-Turing thesis, where lambda-definable functions (from Church) are those that give us "effectively computable" functions. Turing showed that programs implementable on a Turing machine are equivalent to lambda-definable functions.