Hamiltonian graph logical formula - logical-operators

Please construct a logical formula to determine whether the given graph is a Hamiltonian graph.
Please write down the meaning of the atomic propositions you define, and give a brief explanation of all the formulas.
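For reference, one common way to set this up is a SAT-style position encoding. The following is only a sketch; the propositions and clause set below are my own choice of a standard encoding, not something given in the question. Let G = (V, E) with n = |V|, and let the atomic proposition x_{v,i} mean "vertex v is the i-th vertex visited on the cycle" (v ∈ V, 1 ≤ i ≤ n).

```latex
% Sketch of a standard SAT-style encoding (my own choice of propositions).
% Atomic propositions: x_{v,i} = "vertex v is the i-th vertex on the cycle".
\begin{align*}
&\bigwedge_{i=1}^{n} \bigvee_{v \in V} x_{v,i}
  && \text{every position } i \text{ is occupied by some vertex}\\
&\bigwedge_{i=1}^{n} \;\bigwedge_{v \neq w} \neg\,(x_{v,i} \wedge x_{w,i})
  && \text{no position holds two different vertices}\\
&\bigwedge_{v \in V} \bigvee_{i=1}^{n} x_{v,i}
  && \text{every vertex appears at some position}\\
&\bigwedge_{v \in V} \;\bigwedge_{i \neq j} \neg\,(x_{v,i} \wedge x_{v,j})
  && \text{no vertex appears at two positions}\\
&\bigwedge_{\{v,w\} \notin E} \;\bigwedge_{i=1}^{n} \neg\,(x_{v,i} \wedge x_{w,\,(i \bmod n) + 1})
  && \text{consecutive positions (wrapping around) must be joined by an edge}
\end{align*}
```

The conjunction of these five formulas is satisfiable exactly when G contains a Hamiltonian cycle; for a Hamiltonian path instead, drop the wrap-around in the last constraint.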

Related

Algorithm for efficient identification of bipartite graph

I'm looking for an algorithm to efficiently determine whether a given graph (implemented either as an adjacency matrix or as an adjacency list, whichever makes the algorithm run faster) is bipartite or not.
I'm aware that a slight modification of the BFS algorithm can serve this purpose, but I haven't been able to do better than BFS's time complexity of O(|E|+|V|).
I'd expect to find an algorithm that is a bit faster, taking advantage of the fact that a bipartite graph lacks cycles with an odd number of edges.
Does anyone know such an algorithm, or have any suggestions for addressing this problem?
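For what it's worth, the linear-time check itself is just a BFS 2-colouring. A minimal sketch in R (the adjacency-list representation and the function name are my own, illustrative choices):

```r
# Minimal sketch: BFS 2-colouring, O(|V| + |E|).
# `adj` is assumed to be an adjacency list: a list of integer vectors,
# where adj[[v]] holds the neighbours of vertex v (1-based).
is_bipartite <- function(adj) {
  n <- length(adj)
  colour <- rep(NA_integer_, n)
  for (start in seq_len(n)) {
    if (!is.na(colour[start])) next        # already coloured in an earlier component
    colour[start] <- 0L
    queue <- start
    while (length(queue) > 0) {
      v <- queue[1]
      queue <- queue[-1]
      for (w in adj[[v]]) {
        if (is.na(colour[w])) {
          colour[w] <- 1L - colour[v]
          queue <- c(queue, w)
        } else if (colour[w] == colour[v]) {
          return(FALSE)                    # odd cycle: both endpoints of an edge share a colour
        }
      }
    }
  }
  TRUE
}

# Example: the path 1-2-3 is bipartite
# is_bipartite(list(2L, c(1L, 3L), 2L))   # TRUE
```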

Is it possible to obtain lower-precision path lengths?

I am working with a scientific package that makes heavy use of igraph's shortest-path algorithm to calculate path lengths. However, for the graphs we are interested in, the matrix returned is very memory-intensive, easily scaling to tens of GB. Also, double-precision calculations are not needed; in most cases single precision or even integer precision is enough.
I have two questions:
Is it possible to change the data type of the matrix, say from double to single precision or even integer (if we only need the number of edges)?
Is it possible to change the default value of the path length between unconnected nodes from infinity to null or some other value? (We are considering storing the result as a sparse matrix, but the infinite values are incompatible with that.)
I can't find any arguments or settings that would let me do this, either in the documentation at https://igraph.org/r/doc/distances.html or in the low-level function documentation.
Thanks in advance!
Some tips that may help:
Using a sparse matrix would only help if most vertex pairs are unreachable from each other. This would mean that the graph has many small components. If so, decompose the graph into components, and run the shortest path length calculation separately on each component.
Do you need to store the entire matrix in memory for the next step of your calculation, or can you use the matrix part by part? igraph makes it possible to compute shortest paths not from all sources, but only from certain sources. Process the sources one by one (or, for better performance, small group by small group) instead of all at once; see the sketch below. igraph also supports calculating paths only to certain targets, but due to how the shortest-path finder works, doing the computation target-by-target won't be efficient.
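A rough sketch of that block-by-block idea with the R interface (the placeholder graph, the block size, and the -1 sentinel are my own assumptions, not recommendations from the documentation):

```r
library(igraph)                      # R interface, as in the linked documentation

g <- sample_gnp(10000, 2e-4)         # placeholder graph; substitute your own
block_size <- 500                    # process sources in small groups (tunable)
n <- vcount(g)

for (i in seq(1, n, by = block_size)) {
  sources <- V(g)[i:min(i + block_size - 1, n)]
  d <- distances(g, v = sources)     # rows = this block of sources, cols = all targets
  d[is.infinite(d)] <- -1            # replace Inf with a sentinel of your choice
  storage.mode(d) <- "integer"       # hop counts fit in integers for unweighted graphs
  # ... write this block to disk, or feed it to the next step, then discard it ...
}
```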

What is the difference between a Metric and a Norm?

From my understanding, a metric defines a more abstract entity than a norm, but I don't feel like I truly understand. Can someone please explain it to me in layman's terms?
A norm is a concept that only makes sense when you have a vector space. It defines the notion of the magnitude of a vector and can be used to measure the distance between two vectors as the magnitude of their difference. Norms are homogeneous in that they respect (positive) scaling: if you scale (zoom) a configuration of vectors down or up (an operation that only makes sense in a vector space), the distances between the vectors are scaled in the same proportion.
A metric is a more general notion that can be defined on spaces with no underlying algebraic structure. It captures the concept of distance independently of any algebraic features (which might not even exist in those spaces). If you have a norm, you have a distance, but you can have a distance without having any sum operation or scalar action.
There is a third level of abstraction where the concept of proximity can be expressed without any distance at all. These are called topological spaces; they rely not on the concept of distance (or norms) but on the concept of neighborhood.
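To make the distinction concrete, here is a compact summary using the standard textbook definitions (added for reference, not part of the answer above):

```latex
% Standard definitions, added here only for reference.
\textbf{Norm} (needs a vector space $V$): a map $\|\cdot\| : V \to \mathbb{R}$ with
\[
  \|x\| \ge 0,\qquad \|x\| = 0 \iff x = 0,\qquad
  \|\alpha x\| = |\alpha|\,\|x\|,\qquad \|x+y\| \le \|x\| + \|y\|.
\]
Every norm induces a metric: $d(x,y) := \|x - y\|$.

\textbf{Metric} (needs only a set $X$): a map $d : X \times X \to \mathbb{R}$ with
\[
  d(x,y) \ge 0,\qquad d(x,y) = 0 \iff x = y,\qquad
  d(x,y) = d(y,x),\qquad d(x,z) \le d(x,y) + d(y,z).
\]
A metric that is induced by no norm (it ignores scaling): the discrete metric
\[
  d(x,y) = \begin{cases} 0 & x = y \\ 1 & x \neq y \end{cases}
\]
```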

How can I determine whether Y_n can be represented as a function of X_n

As input I have a sequence of pairs (X_n, Y_n). Consider the following two plots of two possible sequences.
In the first case X_n can be modeled as f(Y_n), while in the second case this obviously makes no sense. The question is: how can I determine whether trying to represent X_n as f(Y_n) makes sense? Presumably there is some criterion or something like that?
What can be done in the multivariate situation (i.e. when we're trying to represent Y as f(X_1, X_2, ..., X_k))?
Please note that fitting the points graphically (e.g. as in the first plot) and eyeballing whether the curve fits the data is not acceptable; I'm looking for a numerical criterion.
Please feel free to propose variants in either MATLAB or R. A link to a page describing an algorithm would be great too!
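One possible numerical criterion, offered only as an illustration and not as an established answer to the question: fit a flexible smoother of one variable on the other and look at the fraction of variance it explains; values near 0 suggest that a functional relationship adds little. A hedged R sketch using the mgcv package (the function name functional_r2 is made up for this example):

```r
# A hedged sketch of one possible criterion (my suggestion, not from the post):
# fraction of variance in y explained by a flexible smooth of x.
library(mgcv)

functional_r2 <- function(x, y) {
  fit <- gam(y ~ s(x))                              # penalised spline fit
  1 - sum(residuals(fit)^2) / sum((y - mean(y))^2)  # in-sample R^2 of the smooth
}

# Example: a clear functional relationship vs. pure noise
x <- runif(500)
functional_r2(x, sin(2 * pi * x) + rnorm(500, sd = 0.1))  # close to 1
functional_r2(x, rnorm(500))                              # close to 0

# Multivariate case (assumed column names x1, x2, y in a data frame df):
# fit <- gam(y ~ s(x1) + s(x2), data = df)
```

For a more robust criterion than in-sample R^2, the same fit can be scored by cross-validated prediction error.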

How do I generate data from a similarity matrix?

Suppose there are 14 objects, each of which either has or lacks each of 1000 binary features. I have a 14x14 similarity matrix, but not the raw 14x1000 data. Is there a way to reconstruct or generate something similar to the raw data, given the similarity matrix?
I tried Monte Carlo simulations, but unconstrained they would take way too much time to achieve even a low level of consistency with the original similarity matrix.
I saw this relevant question: Similarity matrix -> feature vectors algorithm?. However, they wanted to reduce, not increase, dimensionality. Also, I am not sure (1) which matrix or matrices to use, and (2) how to convert them into a binary matrix.
It's impossible to say for sure unless you describe how the similarity scores were computed.
In general, for the usual kind of similarity scoring this is not possible: information has been lost in the transformation from individual features to aggregate statistics. The best you can hope to do is to arrive at a set of features that are consistent with the similarity scores.
I think that is what you are referring to when you say "similar to" the original. That problem is pretty interesting. Suppose similarity was computed as the dot product of two feature vectors (i.e., the count of features that both objects in a pair have set to 1/true). This is not the only choice; it is consistent with a value of 0 (false) meaning "no information". But it may generalize to other similarity measures.
In such a case, the problem is really a constraint-satisfaction (integer programming) problem: a naive approach is to exhaustively search the space of possible objects, not randomly, but guided by the constraints. For example, let SIM(A,B) := the similarity of object A and object B, and define an order on the feature vectors.
If SIM(A,B) = N, then choose A = B minimal in that order (e.g. (1, ..., 1 (N times), 0, ..., 0 (1000-N times))), and then choose the minimal C such that (A,C) and (B,C) have the given similarity values. Once you find an inconsistency, backtrack and increment.
This will find a consistent answer, although the complexity is very high (but probably better than Monte Carlo).
Finding a better algorithm is an interesting problem, but more than this I can't say in an SO post; that's probably a topic for a CS thesis! A small sketch of the backtracking idea follows below.
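A sketch of that backtracking idea in R, assuming dot-product similarity. The enumeration below is only feasible for toy feature counts; it is meant to make the assign-check-backtrack structure concrete, not to handle 1000 features:

```r
# Minimal sketch (assumption: similarity = dot product of binary feature vectors).
# Depth-first search that assigns one object's feature vector at a time and
# backtracks as soon as a pairwise dot product disagrees with the target matrix.
# Note: the diagonal (self-similarity) is not checked here.
reconstruct <- function(sim, n_features) {
  n <- nrow(sim)
  feats <- matrix(NA_integer_, n, n_features)

  consistent <- function(k) {
    # check object k against all previously fixed objects
    if (k == 1) return(TRUE)
    all(feats[1:(k - 1), , drop = FALSE] %*% feats[k, ] == sim[1:(k - 1), k])
  }

  search <- function(k) {
    if (k > n) return(TRUE)
    # enumerate candidate binary vectors for object k, from "minimal" upwards
    # (hopeless for 1000 features; shown only to illustrate the search order)
    for (bits in 0:(2^n_features - 1)) {
      feats[k, ] <<- as.integer(intToBits(bits))[1:n_features]
      if (consistent(k) && search(k + 1)) return(TRUE)
    }
    feats[k, ] <<- NA_integer_   # backtrack
    FALSE
  }

  if (search(1)) feats else NULL
}
```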
