Query language for graph sets: data modeling question

Suppose I have a set of directed graphs. I need to query those graphs. I would like to get a feeling for my best choice for the graph modeling task. So far I have these options, but please don't hesitate to suggest others:
Proprietary implementation (matrix) with graph traversal algorithms
RDBMS and SQL option (too space-consuming)
RDF and SPARQL option (too slow)
What would you guys suggest? Regards.
EDIT: Just to answer Mad's questions:
Each one is relatively small, no more than 200 vertices, 400 edges. However, there are hundreds of them.
Frequency of querying: hard to say, it's an experimental system.
Speed: not real time, but practical, say 4-5 seconds tops.

You didn't give us enough information to respond with a well-thought-out answer. For example: what size are these graphs? How frequently do you expect to query them? Do you need real-time responses to these queries? More information on what your application is for and what your purpose is would be helpful.
Anyway, to counter the usual responses that assume SQL-based DBMSes are unable to handle graph structures effectively, I will give some references:
Graph Transformation in Relational Databases (.pdf), by G. Varro, K. Friedl, D. Varro, presented at International Workshop on Graph-Based Tools (GraBaTs) 2004;
5 Conclusion and Future Work
In the paper, we proposed a new graph transformation engine based on off-the-shelf relational databases. After sketching the main concepts of our approach, we carried out several test cases to evaluate our prototype implementation by comparing it to the transformation engines of the AGG [5] and PROGRES [18] tools.
The main conclusion that can be drawn from our experiments is that relational databases provide a promising candidate as an implementation framework for graph transformation engines. We call attention to the fact that our promising experimental results were obtained using a worst-case assessment method, i.e. by recalculating the views of the next rule to be applied from scratch, which is still highly inefficient, especially for model transformations with a large number of independent matches of the same rule. ...
They used PostgreSQL as the DBMS, which is probably not particularly good for this kind of application. You can try LucidDB and see if it does better, as I suspect it will.
Incremental SQL Queries (more than one paper here; you should concentrate on "Maintaining Transitive Closure of Graphs in SQL"):
"... we showed that transitive closure, alternating paths, same generation, and other recursive queries, can be maintained in SQL if some auxiliary relations are allowed. In fact, they can all be maintained using at most auxiliary relations of arity 2. ..."
Incremental Maintenance of Shortest Distance and Transitive Closure in First Order Logic and SQL.
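To make the SQL side concrete, here is a minimal sketch using Python's built-in sqlite3 module (an embeddable DBMS, as mentioned in the edit below). The table layout is invented for the example, and note that the recursive CTE recomputes reachability from scratch; the papers above are about maintaining it incrementally instead:

import sqlite3

# In-memory database; a file path would give you persistence.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE edge (src INTEGER, dst INTEGER)")
conn.executemany("INSERT INTO edge VALUES (?, ?)",
                 [(1, 2), (2, 3), (3, 4), (2, 5)])

# All vertices reachable from vertex 1; UNION deduplicates,
# so the recursion also terminates on cyclic graphs.
rows = conn.execute("""
    WITH RECURSIVE reach(v) AS (
        SELECT 1
        UNION
        SELECT e.dst FROM edge AS e JOIN reach AS r ON e.src = r.v
    )
    SELECT v FROM reach
""").fetchall()
print(sorted(r[0] for r in rows))  # [1, 2, 3, 4, 5]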
Edit: you gave more details, so... I think the best way is to experiment a little with both a dedicated main-memory graph library and a DBMS-based solution, then carefully evaluate the pros and cons of each.
For example: a DBMS needs to be installed (unless you use an "embeddable" DBMS like SQLite); only you know if/where your application needs to be deployed and who your users are. On the other hand, a DBMS gives you immediate benefits, like persistence (I don't know what support graph libraries give for persisting their graphs), transaction management and countless others. Are these relevant for your application? Again, only you know.

The first option you mentioned seems best. If your graph won't have many edges (|E| = O(|V|)), then you might get better time and space complexity using a Dictionary:
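// adjacency list: each vertex maps to the set of its out-neighbors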
var graph = new Dictionary<Vertex, HashSet<Vertex>>();
An interesting graph library is QuickGraph. I've never used it, but it seems promising :)

I have written and designed quite a few graph algorithms, for programming contests and for production code, and I noticed that every time I need one, I have to develop it from scratch, assembling concepts from graph theory (BFS, DFS, topological sorting, etc.).
Perhaps lack of experience is the reason, but it seems to me that there is still no reasonable general-purpose query language for graph problems. Pick a couple of general-purpose graph libraries and solve your particular task in a programming (not query!) language. That will give you the best performance and space consumption, but it will also require an understanding of basic graph theory concepts and their limitations.
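For instance, a textbook breadth-first traversal over a plain adjacency-list graph takes only a few lines in a general-purpose language. A minimal, library-free Python sketch:

from collections import deque

def bfs_order(graph, start):
    # graph maps each vertex to an iterable of its out-neighbors
    seen = {start}
    order = []
    queue = deque([start])
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in graph.get(v, ()):
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return order

g = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(bfs_order(g, "a"))  # ['a', 'b', 'c', 'd']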
And one last thing: do not use SQL for graphs.

Related

NetworkX vs GraphDB: do they serve similar purposes? When to use one or the other and when to use them together?

I am trying to understand if I should use a GraphDB for my project. I am mapping a computer network and I use NetworkX. The relationships are physical or logical adjacency (L2 and L3). In the current incarnation my program scans the network and dumps the adjacency info into a Postgres RDB. From there I use Python to build my graphs with NetworkX.
I am trying to understand if I should change my approach and whether there is any benefit in storing the info in a GraphDB. Postgres has AgensGraph, which seems to be built on top of Postgres as a GraphDB overlay or add-on. I do not know yet whether installing this on top will make my life easier. I barely survived the migration from SQLite to Postgres :-) and to SQLAlchemy, so now, not even 3 months later, I am reconsidering things while I still can (the migration is not complete).
I could choose to use a mix, but I am not sure it makes sense to use a GraphDB. From what I understand, these have advantages such as not needing a schema (which helps a lot for a DB newbie like me).
I am also wondering whether NetworkX (a Python library) and GraphDB overlap in any way. As far as I understand these things, NetworkX could be instrumental in analyzing the topology of the graph, while GraphDB is mainly used to query the data stored in the DB. Do they overlap in any way? Can they be used together?
TLDR: Use Neo4j or OrientDB to store data and networkx for processing it (if you need complicated algorithms). It will be the best solution.
I strongly recommend against using GraphDB for your purposes. GraphDB is an RDF store meant for the semantic web and general knowledge storage; it is not supposed to be used for problems like yours. There are many graph databases that will fit your needs much better. I can recommend Neo4j (the most popular graph database, as you can see; free, but not open source) or OrientDB (the most popular open-source graph database).
I used a graph database when I had a similar problem (though I used HP UCMDB, which is corporate software and not free). It was really MUCH better than the average relational DB. So the idea of using a graph database is sound, and it fits this kind of problem naturally.
I am not sure you really need networkx to analyze the graph (you can use graph query languages for that), but if you want, you can load the data from your DB into networkx via GraphML or some other method (OrientDB offers similar options) and process it there.
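For example, pulling a GraphML export into networkx is essentially a one-liner; the file name and node ids here are made up for the illustration:

import networkx as nx

# Hypothetical GraphML dump produced by the database's export feature.
g = nx.read_graphml("network_topology.graphml")

print(g.number_of_nodes(), g.number_of_edges())
# Node ids depend entirely on the export; these two are invented.
print(nx.shortest_path(g, source="router1", target="router9"))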
And a little question-and-answer quiz to finish:
As far as I understand these things NetworkX could be instrumental in analyzing the topology of the graph
Absolutely right.
while GraphDB is mainly used to query the data stored in the DB.
It is a database. And, yes, it is mainly used to query the data.
Do they overlap in any way?
They are both about graphs. Of course they overlap :)
Can they be used together?
Yes, they can. No, they should not be used together for your problem.

Using Neo4j ImpermanentGraphDatabase for real life scenarios with large amounts of data

I use Neo4j to do calculations on complex graphs from data stored in a relational database. These calculations must be done frequently, so the natural solution has been to use Neo4j to create impermanent Neo4j graphs on the fly.
I keep finding references like the one below on the internet (Neo4j: is it an in-memory graph database?):
Neo4j features a stripped down variant called ImpermanentGraphDatabase. This one is intended to be used for testing only. E.g. when you develop a graph enabled application your unit tests might use it. It is not recommended to use ImpermanentGraphDatabase for real life scenarios with large amounts of data.
I'm doing exactly the above: using ImpermanentGraphDatabase in a real-life scenario, with thousands of nodes, on which I do on-the-fly calculations.
Creating an embedded database each time I need to do a calculation on the fly is not feasible, so what solution does Neo4j offer for this scenario? What exactly happens if you use Neo4j's ImpermanentGraphDatabase for real-life scenarios with large amounts of data?
There is the little word 'not' in the text.
Also, in the (free) book "Graph Databases" (download), the impermanent mode is recommended only for testing, NOT for production environments. If you want to use your graph locally (no cluster), you might use the embedded mode. Keep in mind that there are always two parts to a graph DB: the storage and the traversal engine. But I think you're looking for a (Cypher-)queryable data structure? It's worth having a look at TinkerPop/Gremlin (link).
For your purpose, it depends on which part will change: the structure of your data, the formula for the calculation, or just the values? If your graph is static, you just need to update your data (not the graph database). If your graph is dynamic, you might optimize your algorithm and use different data structures. A graph is not always the best choice; neither are trees, lists or dictionaries.
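To make that last point concrete: if only the values change while the structure stays fixed, you can keep one in-memory graph alive and refresh its attributes before each calculation instead of rebuilding anything. A sketch of the idea using networkx (my assumption, not something the question mentions):

import networkx as nx

g = nx.DiGraph()
g.add_weighted_edges_from([("a", "b", 1.0), ("b", "c", 2.0)])

def recalculate(graph, new_weights):
    # Only edge weights change; the graph structure is reused as-is.
    for (u, v), w in new_weights.items():
        graph[u][v]["weight"] = w
    return dict(nx.shortest_path_length(graph, source="a", weight="weight"))

print(recalculate(g, {("a", "b"): 0.5}))  # {'a': 0, 'b': 0.5, 'c': 2.5}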

Custom Graph Partitioning algorithms in Giraph

There have been mentions of using custom partitioning algorithms for Giraph applications, but it is not clearly explained anywhere. As Castagna pointed out in how to partition graph for pregel to maximize processing speed?, there may be no need for such partitioning, since the HashPartitioner will by itself be very good in most cases.
The problem of partitioning a graph 'intelligently' in order to minimize execution time is an interesting one; however, it's not simple and it depends on your data and your algorithm. You might also find that, in practice, it's not necessary and a random partitioning is sufficiently good.
For example, if you are interested in exploring Pregel-like approaches, you can have a look at Apache Giraph and experiment with different partitioning techniques.
However, for the purpose of learning it would be good to see live examples, and I have found none so far. For example, the normal k-way partitioning algorithm (Kernighan-Lin) being executed in Giraph, or at least the direction in which I should implement it.
All the Google results were from the Apache Giraph page, where there are only definitions of the functions and the various options for using them.
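For orientation, the default HashPartitioner boils down to the following idea (a Python sketch of the concept, not Giraph's actual Java code):

def hash_partition(vertex_id, num_workers):
    # Giraph's default: assign each vertex to a worker purely by
    # hashing its id, ignoring the edge structure entirely.
    return hash(vertex_id) % num_workers

# A Kernighan-Lin-style partitioner would instead try to place densely
# connected vertices on the same worker, to reduce network traffic
# between supersteps.
vertices = ["v%d" % i for i in range(8)]
print({v: hash_partition(v, 4) for v in vertices})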

Graph data structures in LabVIEW

What's the best way to represent graph data structures in LabVIEW?
I'm doing some basic algorithm review over the holiday, and I'd prefer to not implement all of the storage and traversals myself, if possible.
(I'm aware that there was a thread a few years ago on LAVA, is that my best bet?)
I've never had a need to do this myself, so I never really looked into it, but there are some people who have done some work on it, as far as I know.
Brian K. has posted something over here, although it's been a long time since I looked at it:
https://decibel.ni.com/content/docs/DOC-12668
If that doesn't help, I would suggest you read this and then try sending a PM to Daklu there, as he's the most likely candidate to have something.
https://decibel.ni.com/content/thread/8179?tstart=0
If not, I would suggest posting a question on LAVA, as you're more likely to find the relevant people there.
Well, you don't have that many options for graphs, from a simple point of view. It really depends on the types of algorithms you are running; choose the most convenient representation for them.
An adjacency matrix is simple, but it can be slow for some tasks and wasteful if the graph is not dense.
Alternatively, you can keep a couple of lists and hash maps of your edges and vertices. With each edge or vertex assigned a unique index into its list when created, it's pretty simple to keep things under control. Each vertex can then be associated with a list of its neighbors. Depending on your needs, you could divide that neighbor list into in-edges and out-edges. Also, depending on your lookup needs, you could index edges by their source vertex, their target vertex, both, or simply by a unique index number.
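LabVIEW diagrams can't be shown in text, so here is that bookkeeping sketched in Python; flat integer-indexed arrays like these map directly onto LabVIEW arrays and clusters:

# Flat storage: an edge is (src_index, dst_index); each vertex keeps
# separate lists of the indices of its incoming and outgoing edges.
vertices = []   # vertex payloads
out_edges = []  # out_edges[v] -> list of edge indices leaving v
in_edges = []   # in_edges[v]  -> list of edge indices entering v
edges = []      # edges[e] -> (src, dst)

def add_vertex(payload):
    vertices.append(payload)
    out_edges.append([])
    in_edges.append([])
    return len(vertices) - 1

def add_edge(src, dst):
    edges.append((src, dst))
    e = len(edges) - 1
    out_edges[src].append(e)
    in_edges[dst].append(e)
    return e

a, b = add_vertex("A"), add_vertex("B")
add_edge(a, b)
print([edges[e][1] for e in out_edges[a]])  # out-neighbors of A: [1]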
I had a glance at the LabVIEW quick reference, and while it was not obvious from there how you would do this, as long as it has arrays of some sort, you can implement a graph. I'm sure you'll be fine.

Modelling / documenting functional programs

I've found UML useful for documenting various aspects of OO systems, particularly class diagrams for overall architecture and sequence diagrams to illustrate particular routines. I'd like to do the same kind of thing for my Clojure applications. I'm not currently interested in Model Driven Development, simply in communicating how applications work.
Is UML a common / reasonable approach to modelling functional programming? Is there a better alternative to UML for FP?
the "many functions on a single data structure" approach of idiomatic Clojure code waters down the typical "this uses that" UML diagram because many of the functions end up pointing at map/reduce/filter.
I get the impression that because Clojure is a somewhat more data centric language a way of visualizing the flow of data could help more than a way of visualizing control flow when you take lazy evaluation into account. It would be really useful to get a "pipe line" diagram of the functions that build sequences.
map and reduce etc would turn these into trees
Most functional programmers prefer types to diagrams. (I mean types very broadly speaking, to include such things as Caml "module types", SML "signatures", and PLT Scheme "units".) To communicate how a large application works, I suggest three things:
Give the type of each module. Since you are using Clojure you may want to check out the "Units" language invented by Matthew Flatt and Matthias Felleisen. The idea is to document the types and the operations that the module depends on and that the module provides.
Give the import dependencies of the interfaces. Here a diagram can be useful; in many cases you can create the diagram automatically using dot (a small sketch follows this list). This has the advantage that the diagram always accurately reflects the code.
For some systems you may want to talk about important dependencies of implementations. But usually not—the point of separating interfaces from implementations is that the implementations can be understood only in terms of the interfaces they depend on.
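As a sketch of the "generate the diagram from the code" idea in the second point above, a few lines of Python can scan source files and emit Graphviz dot. The regex, the src directory, and the one-namespace-per-file assumption are all simplifications for illustration, not a finished tool:

import os, re

# Crude scan: every `(:require some.ns ...)` occurrence becomes a dot
# edge. Real code would use a proper Clojure reader, not a regex.
require_re = re.compile(r"\(:require\s+([\w.\-]+)")

print("digraph deps {")
for root, _, files in os.walk("src"):
    for f in files:
        if not f.endswith(".clj"):
            continue
        module = os.path.splitext(f)[0]
        with open(os.path.join(root, f)) as fh:
            for dep in require_re.findall(fh.read()):
                print('  "%s" -> "%s";' % (module, dep))
print("}")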
There was recently a related question on architectural thinking in functional languages.
It's an interesting question (I've upvoted it); I expect you'll get at least as many opinions as you do responses. Here's my contribution:
What do you want to represent in your diagrams? In OO, considering class diagrams, one answer might be: state (or attributes, if you prefer) and methods. So, I would suggest, class diagrams are obviously not the right place to start, since functions have no state and generally implement a single operation (aka method). Do any of the other UML diagrams provide a better starting point for your thinking? The answer is probably yes, but you need to consider what you want to show and find that starting point yourself.
Once you've written a (sub-)system in a functional language, then you have a (UML) component to represent on the standard sorts of diagram, but perhaps that is too high-level, too abstract, for you.
When I write functional programs, which is not a lot, I admit, I tend to document functions as I would document mathematical functions (I work in scientific computing; there's lots of maths knocking around, so this is quite natural for me). For each function I write:
an ID;
sometimes, a description;
a specification of the domain;
a specification of the co-domain;
a statement of the rule, ie the operation that the function performs;
sometimes I write post-conditions too though these are usually adequately specified by the co-domain and rule.
I use LaTeX for this; it's good for mathematical notation, but any other reasonably flexible text or word processor would do. As for diagrams, no, not so much; but that's probably a reflection of the primitive state of the design of the systems I program functionally. Most of my computing is done on arrays of floating-point numbers, so most of my functions are very easy to compose ad hoc, and the structuring of a system is very loose. I can imagine a diagram which showed functions as nodes and inputs/outputs as edges between nodes; in my case there would be edges between most pairs of nodes, and I'm not sure drawing such a diagram would help me at all.
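As an illustration, one such entry might look like this in LaTeX (the function and its specification are invented for the example):

\paragraph{F-17: normalize.}
Scales a vector to unit length.
\[
  \mathit{normalize} : \mathbb{R}^n \setminus \{\mathbf{0}\} \to \mathbb{R}^n,
  \qquad
  \mathit{normalize}(x) = \frac{x}{\lVert x \rVert_2}
\]
Post-condition: $\lVert \mathit{normalize}(x) \rVert_2 = 1$.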
I seem to be coming down on the side of telling you no, UML is not a reasonable way of modelling functional systems. Whether it's common, SO will tell us.
This is something I've been trying to experiment with too; after a few years of programming in Ruby, I was used to class/object modeling. In the end, I think the kinds of designs I create for Clojure libraries are actually pretty similar to what I would do for a large C program.
Start by outlining the domain model. List the main pieces of data being moved around and the primary functions being performed on this data. I write these in my notebook, and a lot of the time each is just a name with 3-5 bullet points underneath it. This outline will probably be a good approximation of your initial namespaces, and it should point out some of the key high-level interfaces.
If it seems pretty straightforward, then I'll create empty functions for the high-level interface and just start filling them in. Typically each high-level function will require a couple of support functions, and as you build up the whole interface you will find opportunities to share more code, so you refactor as you go.
If it seems like a more difficult problem, then I'll start diagramming the structure of the data and the flow of key functions. Often the diagram and conceptual model that make the most sense will depend on the type of abstractions you choose to use in a specific design. For example, if you use a dataflow library for a Swing GUI, then a dependency graph would make sense, but if you are writing a server to process relational database queries, then you might want to diagram pools of agents and pipelines for processing tuples. I think these kinds of models and diagrams are also much more descriptive in terms of conveying to another developer how a program is architected. They show more of the functional connectivity between the parts of your system, rather than the pretty non-specific information conveyed by something like UML.
